Rootcause Generator


Prompt engineering refers to the art of designing effective prompts, or requests, for language models such as GPT-3 or GPT-4 so that they return accurate and useful responses. Here are some steps you can follow:

  1. Formulate the question clearly: Make sure your question is clear and complete. The models are powerful, but they are also literal in their interpretation. If your question is ambiguous, the model may have trouble delivering a precise answer.
  2. Provide context: Provide as much context as necessary. If your question builds on prior knowledge or concerns a specific situation, make sure you include this information in your prompt.
  3. Specify the type of result: If there are different possible response formats (such as a list, a sentence, an essay, etc.), it may be helpful to specify the desired format.
  4. Demand specificity: In some cases, you can ask the model to provide specific information or pay attention to certain details.
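The four steps above can be sketched as a small helper that assembles a prompt from its parts. This is a minimal illustration, not an established API; the function and parameter names are made up for this example.

```python
def build_prompt(question, context=None, output_format=None, specifics=None):
    """Assemble a prompt following the four steps:
    a clear question, optional context, a desired output
    format, and optional points to pay attention to."""
    parts = []
    if context:
        parts.append(f"Context: {context}")
    parts.append(question)
    if specifics:
        parts.append(f"Pay particular attention to: {specifics}")
    if output_format:
        parts.append(f"Answer as {output_format}.")
    return "\n".join(parts)


prompt = build_prompt(
    question="List possible root causes for a sudden drop in product quality.",
    context="A manufacturing company; no obvious changes in materials, "
            "process, or staffing were reported.",
    output_format="a structured, numbered list",
)
print(prompt)
```

Keeping the pieces separate like this makes it easy to reuse the same question with different contexts or output formats.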

An exemplary prompt could look like this:

"Assume we have a manufacturing company that has suddenly experienced a significant drop in product quality. There are no obvious changes in the materials or the process, and the staff reports no major changes. Based on this information, please list a series of possible root causes for this drop in product quality, and present each point as a structured list."

The model’s response could look like this:

  1. Wear and tear on production equipment: The machinery might be worn out and no longer capable of delivering the required quality.
  2. Changes in suppliers: A material supplier may have changed their processes or ingredients, which could impact quality.
  3. Training needs: There could be new employees who have been hired, or a lack of training and education for existing employees.
  4. Environmental factors: Changes in the environment, such as temperature, humidity, or air quality, could influence product quality.
  5. Human error: Mistakes in manual handling or the input of machine parameters could impair quality.
  6. Software errors: There could be a glitch in the control software of the production machines leading to lowered quality.
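Because the prompt requested a structured, numbered list, a response like the one above can also be processed programmatically. The following is a minimal sketch; the function name and the regular expression are illustrative, not part of any model API.

```python
import re


def parse_numbered_list(response):
    """Split a numbered-list response into (title, description)
    pairs, assuming each item looks like 'N. Title: Description'."""
    pattern = re.compile(r"^\s*\d+\.\s*(.+?):\s*(.+)$", re.MULTILINE)
    return pattern.findall(response)


sample = (
    "1. Wear and tear on production equipment: The machinery might be worn out.\n"
    "2. Changes in suppliers: A supplier may have changed their processes.\n"
)
causes = parse_numbered_list(sample)
# causes[0] == ("Wear and tear on production equipment",
#               "The machinery might be worn out.")
```

Parsing the answer into pairs makes it straightforward to feed the candidate root causes into a downstream artifact such as a P-FMEA worksheet.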

Please note that this model is not capable of diagnosing specific root causes in real situations, as it does not possess specific knowledge or data about your production process. The points listed are merely general considerations that may be relevant in many production environments.

Prompt Engineering: P-FMEA RootCause Generator (BETA)

Note that no approach guarantees that every answer will be perfect. GPT models are powerful, but they are not omniscient and can make mistakes or overlook important details. It is always important to critically examine the answers and consult additional sources if necessary.
