AI Text Writer

Improving Emails using AI

It's no secret that writing has never been my strong suit. I often struggle to transform the jumbled words in my head into clear, coherent, and professional text on the page. This has become a daily frustration, as I'm required to send numerous emails to a diverse audience with varying technical backgrounds and levels of expertise.

Recently I found a solution to this problem: ChatGPT. I simply input my scrambled thoughts and it generates a well-written email. Lately, however, I have been thinking about a few drawbacks to this approach:

  • Data: I don't like the idea of sending sensitive emails to a third-party service. I'm not sure whether they store the data or what they do with it, and redacting sensitive information before pasting is a pain.
  • Convenience: Copying text, switching tabs, pasting, and then doing the reverse is annoying, especially with several browser profiles open for development at the same time.
  • Cost: The service costs money to use.
The idea of this project was to create a simple Chrome extension that lets me highlight text in a text area, right-click, and request a better-written version of it. A modal then pops up showing the improved text.

The Tech

The technical solution was straightforward, with the brunt of the work handled by Ollama. I will create (or have created, depending on when you are reading this) another article to explore Ollama in detail. In short, Ollama provides a method for running local Large Language Models (LLMs) on your own equipment. By default, Ollama is equipped with a REST API, facilitating interaction with any model you've downloaded locally.

Despite minor CORS issues (fixed by running launchctl setenv OLLAMA_ORIGINS "*"), I managed to execute a simple cURL request and received a response from the LLM. The advantage of using Ollama lies in its flexibility; you can modify the initial instructions sent to the LLM and replace the LLM with another model that better fits your needs.

curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}'
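
Inside the extension, the same endpoint can be called with fetch from the background script. Here's a minimal sketch of what that might look like; the function name, the llama2 model choice, and the system instruction are my own assumptions for illustration rather than the exact code in the repository:

// Sketch: ask a local Ollama instance to rewrite a piece of text.
async function rewriteText(text: string): Promise<string> {
  const response = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama2",
      system: "Rewrite the following text so it reads as a clear, professional email.",
      prompt: text,
      stream: false, // return one JSON object instead of a stream of chunks
    }),
  });

  if (!response.ok) {
    throw new Error(`Ollama request failed: ${response.status}`);
  }

  const data = await response.json();
  return data.response; // the generated text is in the "response" field
}

Swapping llama2 for another local model, or tweaking the system instruction, is just a matter of editing that payload.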

Following this, hacking together a basic Chrome extension and loading it into Chrome is relatively straightforward, and it's ready to use. I've provided a link to the code here. While it's not quite ready for production, it's fine for experimentation and running locally.
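
For a sense of the moving parts, here is a rough sketch of a Manifest V3 background script that wires up the right-click menu; the menu id, message shape, and the rewriteText helper from the earlier sketch are assumptions, not the published code:

// Sketch of a background service worker for the extension.
// Assumes the "contextMenus" permission and the rewriteText helper above.
chrome.runtime.onInstalled.addListener(() => {
  chrome.contextMenus.create({
    id: "improve-text",        // hypothetical menu id
    title: "Improve writing",  // label shown in the right-click menu
    contexts: ["selection"],   // only offer it when text is highlighted
  });
});

chrome.contextMenus.onClicked.addListener(async (info, tab) => {
  if (info.menuItemId !== "improve-text" || !info.selectionText || !tab?.id) {
    return;
  }
  const improved = await rewriteText(info.selectionText);
  // Hand the result to a content script, which shows it in the modal.
  chrome.tabs.sendMessage(tab.id, { type: "show-improved-text", text: improved });
});

A content script listening for that message can then render the modal with the rewritten text.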