The Gem assessed my performance as the salesperson, saying I "demonstrated a good grasp of sales fundamentals while navigating the challenges presented by a hesitant prospect," and it even offered several areas for improvement: "Your communication style could have been slightly warmer and more engaging."
Not being a career salesperson, I have no idea if all of this advice amounts to good coaching. It probably doesn’t rise to the level of a legendary coach, such as Jordan Belfort, the Wolf of Wall Street, and his Straight Line System.
Nevertheless, it seems there's some value here. The transcript of the entire chat, saved in the sidebar, is a nice takeaway if you want to go back and review the session.
Some limitations are glaringly obvious after going through the exercise. One is that the Gem, while being consistent in tone during the half-hour exchange, doesn’t go back to earlier points and only moves forward. In a real coaching session, the coach should be able to connect later turns of the conversation with earlier turns.
Also: 4 Apple AI features that ChatGPT already offers (and two more that are coming soon)
I think the same limitation applies to collaborative activities, such as brainstorming a birthday party or working on a resume.
That limitation strikes me as a general issue with large language models. The model probably needs to make more effective use of the context window, that is, everything typed earlier in the exchange. I suspect that's an engineering challenge that requires further development of the underlying Gemini model.
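To make the point concrete, here's a minimal Python sketch of how a typical multi-turn chat assembles its context: every prior turn is re-sent with each request, and anything that no longer fits in the window is invisible to the model. The `call_model` function and the character limit are hypothetical stand-ins, not a real Gemini API.

```python
# Minimal sketch of a multi-turn chat's context window: all prior turns are
# re-sent on every request, and older turns get trimmed once they no longer fit.
# `call_model` is a hypothetical placeholder, not a real Gemini call.

MAX_CONTEXT_CHARS = 16_000  # stand-in for the model's real token limit

def call_model(messages: list[dict]) -> str:
    """Hypothetical model call; returns the assistant's next reply."""
    raise NotImplementedError("replace with a real chat-completion API call")

def chat_turn(history: list[dict], user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    # The model only "remembers" what fits in the window, so the oldest
    # turns are dropped (naively) as the conversation grows.
    while sum(len(m["content"]) for m in history) > MAX_CONTEXT_CHARS:
        history.pop(0)
    reply = call_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply
```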
Second, it appears the Gem relies on the very general knowledge of selling found in whatever training data was used to develop Gemini. For these focused use cases, I suspect the Gem could benefit from retrieval-augmented generation (RAG), an increasingly popular Gen AI technique in which the AI model taps into an external database. That approach might give the Gem access to richer, domain-specific sales knowledge.
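To illustrate what RAG involves, here's a rough Python sketch under some simplifying assumptions: the "database" is a small in-memory list of invented sales-coaching notes, and relevance is scored by simple word overlap rather than the vector embeddings a real system would use.

```python
# Rough sketch of retrieval-augmented generation (RAG): before calling the model,
# look up domain documents relevant to the user's question and prepend them to the
# prompt. The documents and the word-overlap scoring are toy stand-ins.

import re

SALES_DOCS = [
    "Objection handling: acknowledge the concern, then reframe around value.",
    "Discovery questions should uncover budget, authority, need, and timeline.",
    "Closing technique: summarize agreed benefits before asking for the sale.",
]

def words(text: str) -> set[str]:
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k docs sharing the most words with the query (toy scoring)."""
    q = words(query)
    return sorted(docs, key=lambda d: len(q & words(d)), reverse=True)[:k]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question, SALES_DOCS))
    return f"Use this sales knowledge:\n{context}\n\nCoach the user on: {question}"

print(build_prompt("How do I handle a price objection?"))
```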
Third, the underlying process might benefit from storing simple background knowledge in the form of sentences, something OpenAI offers in its "memory" function. Storing background knowledge that way means someone could use a Gem without having to re-establish the same details in every chat.
Also: Google’s new Gemini models achieve ‘near-perfect recall’
For example, if you’re a salesperson, you should be able to store background information such as, “I sell a subscription tech newsletter for $30”, and have the Gem automatically incorporate that fact each time you have a chat.
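Mechanically, that kind of memory can be as simple as a file of stored facts that gets prepended to every new chat. Here's a hypothetical sketch; the file name and structure are my own invention, not part of any real Gems or Gemini feature.

```python
# Sketch of a simple "memory" feature: user-provided facts persist on disk and are
# prepended to every new chat, so the assistant doesn't start from zero each time.

import json
from pathlib import Path

MEMORY_FILE = Path("gem_memory.json")  # illustrative file name

def remember(fact: str) -> None:
    """Append one background fact to the persistent memory file."""
    facts = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    facts.append(fact)
    MEMORY_FILE.write_text(json.dumps(facts, indent=2))

def start_chat(system_prompt: str) -> list[dict]:
    """Begin a new chat with all stored facts folded into the system prompt."""
    facts = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    background = "Known facts about the user:\n" + "\n".join(f"- {f}" for f in facts)
    return [{"role": "system", "content": f"{system_prompt}\n\n{background}"}]

remember("I sell a subscription tech newsletter for $30")
history = start_chat("You are a sales coach.")
```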
This brings me to the fourth and most glaring omission: Gems have no record of past conversations. Even though a transcript of each chat with the Gem is stored, the Gem itself starts blank each time you use it. You can't ask the Gem to explore something from a prior session because that earlier exchange is no longer part of the Gem's context window.
Also: I tried ChatGPT’s memory function and found it intriguing but limited
That's a big deficit if you want to return to the Gem over and over. For example, if you want another coaching session, you should be able to revisit the things that came up in a prior session and build on that exchange, as an additive process, rather than starting from scratch.
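Here's a rough sketch of what carrying a session forward could look like: save each transcript, then seed the next session with a recap of the last one. The paths and the recap step are purely illustrative and don't reflect how Gems actually store chats.

```python
# Sketch of cross-session continuity: persist each session's transcript, then seed
# the next session's context with a recap of the most recent one. Everything here
# (paths, recap logic) is illustrative, not how Gems actually work.

import json
from pathlib import Path

SESSIONS_DIR = Path("coaching_sessions")
SESSIONS_DIR.mkdir(exist_ok=True)

def save_session(history: list[dict]) -> None:
    """Write the finished session's message list to a numbered file."""
    n = len(list(SESSIONS_DIR.glob("session_*.json"))) + 1
    (SESSIONS_DIR / f"session_{n:03d}.json").write_text(json.dumps(history, indent=2))

def resume_latest(system_prompt: str) -> list[dict]:
    """Start a new session seeded with the last few turns of the previous one."""
    sessions = sorted(SESSIONS_DIR.glob("session_*.json"))
    prior = json.loads(sessions[-1].read_text()) if sessions else []
    recap = "\n".join(f'{m["role"]}: {m["content"]}' for m in prior[-6:])
    seed = f"{system_prompt}\n\nRecap of the previous coaching session:\n{recap}"
    return [{"role": "system", "content": seed}]
```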
Imagine having a real-world coach – of any kind, sales, fitness, ice hockey, whatever – who never remembered where you last left off in your long journey to get better. You’d probably seek a coach who paid more attention and had a memory.
Despite those shortcomings, Gems have the value of bringing a user up to speed on the basics of prompt engineering. That capability is useful for a general audience that may not even know prompt engineering exists.