Table of contents
Software Testing with Generative AI
brief contents
contents
foreword
preface
acknowledgments
about this book
Who should read this book
How this book is organized: A road map
About the code
liveBook discussion forum
about the author
about the cover illustration
Part 1
1 Enhancing testing with large language models
1.1 Recognizing the effect of AI tools on testing and development
1.1.1 Data generation
1.1.2 Automated test building
1.1.3 Test design
1.2 Delivering value with LLMs
1.2.1 A model for delivering value
1.2.2 Using both human and AI abilities
1.2.3 Being skeptical of LLMs
2 Large language models and prompt engineering
2.1 LLMs explained
2.2 Avoiding the risks of using LLMs
2.2.1 Hallucinations
2.2.2 Data provenance
2.2.3 Data privacy
2.3 Improving results with prompt engineering
2.4 Examining the principles of prompt engineering
2.4.1 Principle 1: Write clear and specific instructions
2.4.2 Tactic 1: Use delimiters
2.4.3 Tactic 2: Ask for structured output
2.4.4 Tactic 3: Check for assumptions
2.4.5 Tactic 4: Few-shot prompting
2.4.6 Principle 2: Give the model time to “think”
2.4.7 Tactic 1: Specify the steps to complete the task
2.4.8 Tactic 2: Instruct the model to work out its own solution first
2.5 Working with various LLMs
2.5.1 Comparing LLMs
2.5.2 Examining popular LLMs
2.6 Creating a library of prompts
2.7 Solving problems by using prompts
3 Artificial intelligence, automation, and testing
3.1 The value of testing
3.1.1 A different way of thinking about testing
3.1.2 A holistic approach to testing
3.2 How tools help with testing
3.2.1 Automation bias
3.2.2 Being selective with tooling
3.3 Knowing when to use LLMs in testing
3.3.1 Generative capabilities
3.3.2 Transformation capabilities
3.3.3 Enhancing capabilities
3.3.4 LLMs in use in testing
Part 2
4 AI-assisted testing for developers
4.1 Examining the rise of the automated developer
4.2 Pairing with LLMs
4.2.1 Analyzing ideas
4.2.2 Analyzing code
4.2.3 Recognizing that a simulation is better than nothing at all
4.3 Building in quality with AI assistance
4.4 Creating our first TDD loop with LLMs
4.4.1 Preparing the work
4.4.2 Loop 1: Save a timesheet entry
4.4.3 Loop 2: Retrieve a timesheet entry
4.4.4 Loop 3: Calculating times for a project
4.4.5 Refactoring code
4.5 Improving documentation and communication with LLMs
4.5.1 Generating code comments
4.5.2 Generating release notes
4.6 Maintaining a balance with code assistants
5 Test planning with AI support
5.1 Defining test planning in modern testing
5.1.1 Test planning, LLMs, and area of effect
5.2 Focused prompts with the use of models
5.2.1 Weak prompts mean weak suggestions
5.2.2 What are models and why can they help
5.3 Combining models and LLMs to assist test planning
5.3.1 Creating a model to identify prompts
5.3.2 Experimenting with different model types
5.4 LLMs and test cases
5.4.1 Having a healthy skepticism of generated risks and test cases
6 Rapid data creation using AI
6.1 Generating and transforming data with LLMs
6.1.1 Prompting LLMs to generate simple data sets
6.1.2 Transforming test data into different formats
6.2 Processing complex test data with LLMs
6.2.1 Using format standards in prompts
6.2.2 SQL exports as prompt guides
6.3 Setting up LLMs as test data managers
6.3.1 Setting up an OpenAI account
6.3.2 Connecting to OpenAI
6.4 Benefiting from generated test data
7 Accelerating and improving UI automation using AI
7.1 Rapidly creating UI automation
7.1.1 Setting up a project
7.1.2 Creating our initial check with ChatGPT support
7.1.3 Filling in gaps from generated code
7.2 Improving existing UI automation
7.2.1 Updating state management to use an appropriate layer
7.2.2 Getting into the groove with AI tools
8 Assisting exploratory testing with artificial intelligence
8.1 Organizing exploratory testing with LLMs
8.1.1 Augmenting identified risks with LLMs
8.1.2 Augmenting charter lists with LLMs
8.2 Using LLMs during exploratory testing
8.2.1 Establishing an understanding
8.2.2 Creating data requirements for a session
8.2.3 Exploring and investigating bugs
8.2.4 Using LLMs to assist exploratory testing
8.3 Summarizing testing notes with LLMs
9 AI agents as testing assistants
9.1 Understanding AI agents and LLMs
9.1.1 What defines an AI agent?
9.1.2 How an agent works with LLMs
9.2 Creating an AI Test Assistant
9.2.1 Setting up our dummy AI agent
9.2.2 Giving our AI agent functions to execute
9.2.3 Chaining tools together
9.3 Moving forward with AI test assistants
9.3.1 Examples of AI test assistants
9.3.2 Handling the challenges of working with agents
Part 3
10 Introducing customized LLMs
10.1 The challenge with LLMs and context
10.1.1 Tokens, context windows, and limitations
10.1.2 Embedding context as a solution
10.2 Embedding context further into prompts and LLMs
10.2.1 RAG
10.2.2 Fine-tuning LLMs
10.2.3 Comparing the two approaches
10.2.4 Combining RAG and fine-tuning
11 Contextualizing prompts with retrieval-augmented generation
11.1 Extending prompts with RAG
11.2 Building a RAG setup
11.2.1 Building our RAG framework
11.2.2 Testing our RAG framework
11.3 Enhancing data storage for RAG
11.3.1 Working with vector databases
11.3.2 Setting up a vector-database-backed RAG
11.3.3 Testing a vector-database-backed RAG framework
11.3.4 Going forward with RAG frameworks
12 Fine-tuning LLMs with business domain knowledge
12.1 Exploring the fine-tuning process
12.1.1 A map of the fine-tuning process
12.1.2 Goal setting
12.2 Executing a fine-tuning session
12.2.1 Preparing data for training
12.2.2 Preprocessing and setup
12.2.3 Working with fine-tuning tools
12.2.4 Setting off a fine-tuning run
12.2.5 Testing the results of a fine-tune
12.2.6 Lessons learned with fine-tuning
appendix A Setting up and using ChatGPT
appendix B Setting up and using GitHub Copilot
B.1 Setting up Copilot
B.1.1 Setting up a Copilot account
B.1.2 Installing the Copilot plugin
B.1.3 Granting access to your Copilot account
B.2 Working with Copilot
B.2.1 Exploring suggestions
appendix C Exploratory testing notes
index