Meetup "Prompt Engineering: How to ask ChatGPT. And: How to do CI/CD in XXL." bei TNG Technology Consulting am 20.07.23

Date: 05.12.23
Category: NEWS & EVENTS

At first glance, LLMs like ChatGPT seem to be fully-fledged AIs that you can use for anything. After some use, however, it becomes clear that they have their limitations and that users need some understanding of the technology to get the most out of them. David Farago will therefore give us some insight into how best to query LLMs.

Our second talk will provide insights into the intricacies of a large cloud deployment and the lessons learned through the years.


Talk 1: Prompt Engineering for Software Developers

"AI will not take over your job, but people skilled in AI might" [Philip Hodgetts]. But what does that mean for software developers?

The application of large language models (LLMs) and the engineering of their prompts are producing wow effects and innovations on a weekly basis, but mainly for "open-ended question tasks", where the set of answers that are considered correct is relatively large, like the Duolingo conversation partner and AutoGPT's web-based research.
For "closed-ended question tasks", which have more restrictive requirements, e.g. the generation of software engineering artifacts like code and tests, much fewer answers are considered correct. Here, LLMs often answer incorrectly, which is a show stopper for many of those tasks.
Due to this problem, many emerging prompt patterns try to improve the correctness and other quality aspects of the LLM's answers.

This talk presents various prompt patterns (e.g. Chain-of-Thought, external information, self-critique) and other means (e.g. a Copilot UI) to improve the quality of answers for "closed-ended question tasks".
The main part performs prompt engineering to iteratively improve the generation of unit tests for a given function implementation.
We conclude that the correctness and comprehensiveness of the LLM's answers greatly depend on the prompts, the LLM, and how the LLM is applied.
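
To give a flavour of how such patterns look in practice, here is a minimal, hypothetical Python sketch of a Chain-of-Thought prompt for unit-test generation, combined with a self-critique instruction at the end. The function under test, the prompt wording, and the omitted client call are illustrative assumptions, not material from the talk.

```python
# Illustrative sketch: a Chain-of-Thought style prompt asking an LLM to
# generate pytest unit tests for a given function, ending with a
# self-critique step. The function under test is a made-up example.

FUNCTION_UNDER_TEST = '''
def leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
'''

PROMPT = f"""You are an expert Python developer writing pytest unit tests.

Function under test:
{FUNCTION_UNDER_TEST}

Think step by step before answering:
1. List the relevant input classes and edge cases (e.g. century years).
2. For each case, state the expected result and explain why.
3. Only then emit a complete pytest module covering all of these cases.

Finally, review your own tests and point out any case you may have missed.
"""

# Sending PROMPT to an LLM (e.g. via a chat-completion API) is left out here,
# since the exact client call depends on the model and library you use.
print(PROMPT)
```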

After this talk, you will know the main weaknesses of LLMs, and how to cope with them. In particular, you will know various prompt patterns and what they can (not) accomplish for "closed-ended question tasks" like the generation of code and tests.

About the speaker:
Dr. David Farago is a developer and entrepreneur at the cutting edge: he did model-based testing before it was widely accepted, agile software development before it was cool, cloud just because he can, and AI because we need people who understand it. He always takes an academic approach to evaluating the newest trends in IT.

Talk 2: Best Practices for Managing Large Infrastructure

In this talk, we will share the experience we gained over the last few years of providing and maintaining hundreds of application instances via a self-service portal on cloud infrastructures all around the world. Our customers request their own instance of a pre-defined set of third-party applications that originate from a pre-cloud era and usually only offer manual configuration. Who made which change, when, and why? In such a setup this question usually cannot be answered, since each instance is as individual as a snowflake.

With our tooling, we persist the configuration of each snowflake so that it becomes reproducible, auditable, and easier to understand. By introducing not only Infrastructure as Code (IaC) but also Configuration as Code (CaC), we treat every piece of configuration as source code and can use any cloud provider. We will explain how we utilize technologies such as Kubernetes, Helm, Terraform, Docker, and Crossplane to implement this concept, and close the talk by sharing the best practices we learned along the way.
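
As a rough illustration of the Configuration as Code idea described above, here is a minimal, hypothetical Python sketch: per-instance settings live as small, auditable override files in version control and are deep-merged with shared defaults to render each instance's configuration reproducibly. The real setup relies on Kubernetes, Helm, Terraform, Docker, and Crossplane; all names and values below are made up.

```python
# Simplified sketch of Configuration as Code: every instance is defined by a
# small overrides document kept in Git, merged with shared defaults so that
# each "snowflake" configuration stays reproducible and auditable.

import json
from copy import deepcopy

DEFAULTS = {
    "replicas": 1,
    "tls": True,
    "smtp": {"host": "mail.example.com", "port": 587},
}

def render_config(defaults: dict, overrides: dict) -> dict:
    """Deep-merge instance-specific overrides into the shared defaults."""
    merged = deepcopy(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = render_config(merged[key], value)
        else:
            merged[key] = value
    return merged

# One customer instance: only its deviations from the defaults are stored.
instance_overrides = {"replicas": 3, "smtp": {"port": 465}}

print(json.dumps(render_config(DEFAULTS, instance_overrides), indent=2))
```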

About the speaker:

Lukas has been passionate about professional software development for over ten years. After completing his computer science studies, he joined TNG in 2017 and currently works as a Principal Software Consultant. He started his career as a web developer and became an expert in cloud, containerization, infrastructure, and DevOps during his time at TNG. "At first, I thought that with web technology, anything could be achieved. Now I know that without container runtimes, it's nothing."

📅 Thursday, July 20, 2023, 18:00

18:00-18:30 | Doors open

18:30-19:15 | TALK 1: Prompt Engineering for Software Developers

19:15-19:45 | Food & Drinks

19:45-20:30 | TALK 2: Best Practices for Managing Large Infrastructure

20:30-... | Networking Time

📍 Amalienbadstraße 41a, 76227 Karlsruhe

➡️ Click here to register!
