
technicality

Giving more thought to some of the technical aspects of my skill stack.

Neo Lee

Automation & AI Agents

automation & problem-solving

It's important to understand the context of a problem before automating it.

Sometimes the best solution is simply changing or reorganizing the process, or finding a creative, dynamic solution within an existing tool, before constructing an elaborate automation.

Other times, it involves stitching together various APIs to make something happen, or using custom code and serverless functions to augment your no/low-code stack.
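As a hypothetical sketch of that kind of glue code: a serverless function sits between a no-code form builder and a CRM, reshaping the webhook payload into what the downstream API expects. The field names and payload shape here are illustrative assumptions, not any real product's API.

```python
import json

# Hypothetical glue layer: a no-code form tool POSTs a webhook payload,
# and we reshape it for a CRM endpoint before forwarding. Field names
# are illustrative, not a real API contract.

def transform_webhook(payload: dict) -> dict:
    """Map a form submission onto the shape the (assumed) CRM expects."""
    return {
        "contact": {
            "name": payload.get("full_name", "").strip(),
            "email": payload.get("email", "").lower(),
        },
        # Tag the lead so downstream automations can filter on source.
        "tags": ["webform"] + payload.get("interests", []),
    }

submission = {"full_name": "  Ada Lovelace ", "email": "ADA@example.com",
              "interests": ["analytics"]}
print(json.dumps(transform_webhook(submission)))
```

In practice this function would be the body of a serverless handler, with the transformed payload forwarded via an HTTP call; the transformation logic is the part worth keeping in code.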

prompt engineering

Prompt engineering requires an understanding of contextual, linguistic, and reasoning nuances.

At its most complex, prompting can approximate ML/NLP algorithms or fine-tuning through natural language alone; other times it means intelligent prompting that abstracts context and parameters out of a large dataset, to manage complexity and stay within token limits.
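One way to sketch that abstraction step: instead of pasting thousands of rows into a prompt and blowing the token budget, compress the dataset into a compact summary (schema, distributions, a few sample rows) for the model to reason over. The dataset and summary fields below are illustrative assumptions.

```python
from collections import Counter

# Sketch: abstract a dataset into a compact summary for a prompt, rather
# than passing raw rows. The fields summarized here are illustrative.

def summarize_for_prompt(rows: list[dict], max_examples: int = 2) -> str:
    """Build a token-frugal description of the dataset for an LLM prompt."""
    columns = sorted({key for row in rows for key in row})
    status_counts = Counter(row.get("status", "unknown") for row in rows)
    return "\n".join([
        f"Rows: {len(rows)}; columns: {', '.join(columns)}",
        "Status distribution: "
        + ", ".join(f"{k}={v}" for k, v in sorted(status_counts.items())),
        f"Sample rows: {rows[:max_examples]}",
    ])

rows = [{"id": 1, "status": "open"}, {"id": 2, "status": "closed"},
        {"id": 3, "status": "open"}]
prompt = ("Given this dataset summary, suggest a triage rule:\n"
          + summarize_for_prompt(rows))
print(prompt)
```

The summary scales with the number of columns rather than the number of rows, which is what keeps the prompt inside the token limit.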

Prompting is not an exact science. LLMs are black boxes and each model has its own quirks, so having a systematic way to test and iterate on prompts before deploying is good practice.
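A minimal harness for that kind of systematic iteration might run each prompt variant against a fixed set of test cases and score the outputs with simple checks. Here `call_llm` is a stub standing in for a real model API; the ticket-classification scenario is invented for illustration.

```python
# Minimal prompt-evaluation harness: run a prompt template against test
# cases and report the pass rate. call_llm is a stub, not a real model.

def call_llm(prompt: str) -> str:
    # Stub: a real implementation would call a model API here.
    return "REFUND APPROVED" if "refund" in prompt.lower() else "UNKNOWN"

def evaluate(template: str, cases: list[dict]) -> float:
    """Return the fraction of test cases whose output passes its check."""
    passed = 0
    for case in cases:
        output = call_llm(template.format(**case["inputs"]))
        if case["check"](output):
            passed += 1
    return passed / len(cases)

cases = [
    {"inputs": {"query": "Please refund my order"},
     "check": lambda out: "APPROVED" in out},
]
score = evaluate("Classify this support ticket: {query}", cases)
print(f"pass rate: {score:.0%}")
```

Comparing pass rates across prompt variants turns "this wording feels better" into a measurable regression test, which matters precisely because each model has its own quirks.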

llm & programming

Half the battle of getting AI to do things well is knowledge organization. Depending on your use case, autonomous agents or RAG can be built to:

  • intelligently recognize which tools to call and what arguments to pass in,

  • dynamically pass context into the LLM based on each query,

  • or leverage semantic chunking, reranking, and other RAG techniques to augment your custom knowledge base, thereby improving your chatbot's outputs.
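The retrieval side of that last bullet can be sketched end to end: chunk a knowledge base, score chunks against the query, rerank, and assemble the context that gets passed to the model. Real systems use embeddings and a trained reranker; plain token overlap stands in for both here, and the knowledge-base text is invented.

```python
# Toy retrieval pipeline: chunk -> score -> rerank -> assemble context.
# Token overlap stands in for embedding similarity and reranking.

def chunk(text: str, size: int = 8) -> list[str]:
    """Split text into fixed-size word chunks (a crude chunking strategy)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(query: str, passage: str) -> int:
    """Count shared lowercase tokens between query and passage."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def build_context(query: str, kb: str, top_k: int = 2) -> str:
    """Rank chunks by score and join the best ones into the LLM context."""
    ranked = sorted(chunk(kb), key=lambda c: score(query, c), reverse=True)
    return "\n---\n".join(ranked[:top_k])

kb = ("Refunds are processed within five business days. "
      "Shipping is free for orders over fifty dollars. "
      "Support is available by email on weekdays.")
context = build_context("how long do refunds take", kb)
print(context)
```

Swapping the overlap score for embedding similarity, and the sort for a reranking model, turns this toy into the usual RAG stack; the knowledge-organization decisions (chunk size, how many chunks to pass) stay the same.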


©Invisible Factor
