
When Google developed the Go programming language in 2009, C++ still ruled, despite many tech pros’ difficulties with it. Meanwhile, newer languages such as JavaScript and Python had the latest and greatest features and were much easier to work with, but were often too slow for demanding applications.
Like C++, Go is a fully compiled language. It includes features that at the time were rarely found in compiled languages, such as automatic memory management through “garbage collection,” which lets programmers create objects without having to manually track and remove them from memory when they’re no longer needed.
One thing that especially stands out with Go: concurrency is built right into the language. It’s very easy to write functions and lines of code that run simultaneously, using a feature called “goroutines.” These goroutines run in threads managed by the Go runtime (which is compiled right into the app) and can be distributed among separate processor cores so they all run simultaneously.
Alternatively, the scheduler can run multiple goroutines on a single core, in which case each goroutine must periodically pause and relinquish control so other pieces of code can run for a bit. (Note that today’s CPUs typically have some cores that can run two threads independently, so those cores would have two such threads running.) Because goroutines are built into the language, multithreaded and parallel programming are incredibly easy to implement in Go. That makes it an ideal language for AI, which typically needs lots of cores and threads to function.
Working with LLMs and Generative AI
Large Language Models (LLMs) are the basis of today’s language-based generative AI. Learning how to write code that interacts with LLMs requires some knowledge of how this technology works, but you can learn it in stages.
The easiest way to integrate an LLM into your coding work is to simply use an online API such as OpenAI’s API. That means there’s very little overhead; you don’t have to install an LLM on your computer, and you need very little computing resources. (To work with LLMs locally, you typically need a powerful GPU, but by calling APIs like this, you don’t need any of that.) If you want to build AI-based applications in Go using remote APIs, here’s what you’ll want to learn:
openai-go: This is the official Go library for OpenAI, written by the people at OpenAI. Note, however, that this is a relatively new library, still in beta at the time of this writing. The project’s repository includes usage instructions and plenty of examples; you’ll also need to sign up for an API key with OpenAI.
Once you’re comfortable making calls to OpenAI’s API, a next step is to use a library that lets you work with LLMs stored locally on your computer while accessing them through an interface such as the OpenAI API. This will help you understand how and where to store the LLM, and what system requirements you need.
One great example here is called LocalAI. Note that LocalAI runs as a web server, meaning you don’t need a language-specific version of it. Instead, you install it, and then have your Go app connect to it instead of OpenAI’s API. One aspect you’ll want to explore is that LocalAI lets you choose which LLMs to use.
There’s one caveat, though: it runs in Docker, a container system used heavily across the software industry. If you’re not familiar with Docker, spend some time learning it, as Docker knowledge alone can be make-or-break in a job interview. Most software development roles today expect you to know how to use Docker, and now is as good a time as any to learn it.
Also, while you’re at this step, spend some time studying how LLMs work. There are many resources online, but be selective: you don’t need to learn how LLMs work internally. There’s a lot of math and science involved, and most developers won’t need that depth unless they plan to work for a company that builds LLMs. Instead, look for topics like:
- Definition of tokens and embeddings
- Definition of sentence transformers
- The steps LLMs follow to read a prompt and generate a response
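To make “tokens and embeddings” concrete, here’s a deliberately toy sketch. The whitespace tokenizer, the four-word vocabulary, and the two-dimensional vectors are all made up for illustration; real LLMs use learned subword tokenizers and much larger embedding tables:

```go
package main

import (
	"fmt"
	"strings"
)

// tokenize is a toy whitespace tokenizer. Real LLMs use subword schemes
// such as byte-pair encoding, which can split one word into several tokens.
func tokenize(text string) []string {
	return strings.Fields(strings.ToLower(text))
}

// A tiny made-up vocabulary mapping tokens to integer IDs.
var vocab = map[string]int{"go": 0, "is": 1, "fast": 2, "<unk>": 3}

// A made-up embedding table: one small vector per token ID. Real models
// use hundreds or thousands of dimensions, learned during training.
var embeddings = [][]float64{
	{0.9, 0.1}, {0.2, 0.5}, {0.7, 0.8}, {0.0, 0.0},
}

// encode turns text into token IDs, then looks up each ID's vector.
func encode(text string) (ids []int, vecs [][]float64) {
	for _, tok := range tokenize(text) {
		id, ok := vocab[tok]
		if !ok {
			id = vocab["<unk>"]
		}
		ids = append(ids, id)
		vecs = append(vecs, embeddings[id])
	}
	return ids, vecs
}

func main() {
	ids, vecs := encode("Go is fast")
	fmt.Println(ids)  // [0 1 2]
	fmt.Println(vecs) // each token's embedding vector
}
```

The pipeline is the same in a real model: text becomes token IDs, and token IDs become vectors the network can do math on.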
As you proceed in your studies, you might be tempted to stop at the previous step (using LocalAI) and skip learning how to interact with an LLM directly in a programming context. However, continuing your learning journey will only make you a better developer in the context of AI.
And here’s where things get a little tricky with Go. While the Python language has lots of great libraries for interacting with LLMs, Go doesn’t have as many—and the existing ones require a fair amount of knowledge of how LLMs work (hence our earlier suggestion about learning what you can about them).
Nevertheless, there are some bindings to libraries written in other languages, allowing you to use LLMs from within Go. One popular C++ library is called llama.cpp. Members of the Go community have developed Go bindings so you can use llama.cpp from within Go—called go-llama.cpp. (Don’t let the .cpp extension fool you; it’s part of the original library’s name, and the developers of the Go bindings kept it in their name as well. It is, indeed, a Go library.)
Once you’re comfortable working with LLMs, move on to Sentence Transformers.
Machine Learning
Machine Learning is a branch of AI where the apps learn new information as new data comes in. For today’s generation of software developers, this might not seem very shocking. But in the olden days of software development, apps had their data stored in a database, and everything the app used was either what the developers put in the database or coded directly into the app. New data would come in, of course, but that was all the apps “knew.”
Amazon, however, was an early pioneer of machine learning technology whereby the apps on its servers would analyze a shopper’s behavior (such as purchases and browsing) and process that information to learn more about the shopper. The apps could then generate product suggestions and display them when the user logged in.
That was just the beginning of modern machine learning: the apps would look at existing data and derive new information from it. To accomplish this, the apps relied heavily on statistics. If you’re wondering how machine learning intersects with Go, start with the statistical fundamentals before reaching for a library.
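To see what “relying on statistics” can look like in Go, here’s a minimal sketch that fits a least-squares line to a handful of made-up data points—about the simplest possible model that “learns” its parameters from data:

```go
package main

import "fmt"

// fitLine computes the slope and intercept of the least-squares line
// through the points (x[i], y[i]) — learning two parameters from data.
func fitLine(x, y []float64) (slope, intercept float64) {
	n := float64(len(x))
	var sumX, sumY, sumXY, sumXX float64
	for i := range x {
		sumX += x[i]
		sumY += y[i]
		sumXY += x[i] * y[i]
		sumXX += x[i] * x[i]
	}
	slope = (n*sumXY - sumX*sumY) / (n*sumXX - sumX*sumX)
	intercept = (sumY - slope*sumX) / n
	return slope, intercept
}

func main() {
	// Hours browsed vs. purchases made — a made-up shopper dataset.
	x := []float64{1, 2, 3, 4, 5}
	y := []float64{2, 4, 6, 8, 10}
	m, b := fitLine(x, y)
	fmt.Printf("y = %.1fx + %.1f\n", m, b) // y = 2.0x + 0.0
}
```

Real machine learning libraries fit far more parameters with far fancier optimizers, but the core loop is the same: use statistics on existing data to estimate parameters, then use those parameters to predict new data.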
Deep Learning
Deep learning takes machine learning to the next level by bringing in neural networks.
Neural networks date back to—no kidding—the 1940s, with the idea of building systems (either software or hardware) that model the human brain and its neurons. In the 1980s, computer scientists were able to create some rather rudimentary neural networks (compared to what we have today) that laid the groundwork for today’s versions.
If you’re interested in using deep learning with Go, you’ll want to make sure you understand the different types of deep learning. We don’t have room here to go into the individual details, but you’ll want to Google the different types of neural networks and read how they work, including their similarities and their differences. Terms to look up include:
- Convolutional neural networks (CNNs)
- Recurrent neural networks (RNNs)
- Feedforward neural networks
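Before reaching for a library, it can help to see how small a feedforward network really is. This sketch runs inputs through one hidden layer with hand-picked weights that approximate logical AND—real networks learn their weights from data, and everything here is a toy of our own construction:

```go
package main

import (
	"fmt"
	"math"
)

// sigmoid squashes any real number into (0, 1) — a classic activation.
func sigmoid(x float64) float64 {
	return 1.0 / (1.0 + math.Exp(-x))
}

// forward runs one input through a tiny feedforward network:
// 2 inputs -> 2 hidden neurons -> 1 output. The weights are hand-picked
// to approximate logical AND; real networks learn them via training.
func forward(a, b float64) float64 {
	// Hidden layer: each neuron is a weighted sum plus bias, then sigmoid.
	h1 := sigmoid(4*a + 4*b - 6)
	h2 := sigmoid(2*a + 2*b - 1)
	// Output layer combines the hidden activations the same way.
	return sigmoid(6*h1 + 1*h2 - 4)
}

func main() {
	for _, in := range [][2]float64{{0, 0}, {0, 1}, {1, 0}, {1, 1}} {
		fmt.Printf("AND(%v, %v) ≈ %.2f\n", in[0], in[1], forward(in[0], in[1]))
	}
}
```

CNNs, RNNs, and their cousins differ mainly in how layers are wired together and how information flows between them; the weighted-sum-plus-activation building block stays the same.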
To get started, check out this gentle introduction on GeeksForGeeks. When you’re ready to get going with neural networks in Go, you can try out these libraries:
- Gorgonia
- go-torch and gotch: These are both bindings to a library called PyTorch, the most popular machine learning library. Both let you use PyTorch from within your Go code.
- TensorFlow for Go: This is a library that lets you work with TensorFlow from within a Go application. TensorFlow is a machine learning library built by Google. It’s quite popular, although it’s mostly used in research and academic settings. Still, even if you don’t work in research, you’ll likely want to learn it if you want to become a deep learning expert.
Natural Language Processing
Natural Language Processing (NLP) is a type of AI that includes LLMs and sentence transformers. If you’re working with both LLMs and sentence transformers, you’ll want to study as much as you can about NLP and try out some general NLP libraries. You can find a huge list of libraries here.
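As a taste of classic NLP before you dive into libraries, here’s a bag-of-words sketch in plain Go—one of the oldest text representations, and still a useful baseline:

```go
package main

import (
	"fmt"
	"strings"
)

// bagOfWords counts how often each lowercase word appears in the text.
// It ignores word order entirely — hence "bag" — which is exactly the
// limitation that transformers and LLMs were designed to overcome.
func bagOfWords(text string) map[string]int {
	counts := make(map[string]int)
	for _, w := range strings.Fields(strings.ToLower(text)) {
		w = strings.Trim(w, ".,!?")
		if w != "" {
			counts[w]++
		}
	}
	return counts
}

func main() {
	counts := bagOfWords("Go is fast. Go is fun!")
	fmt.Println(counts["go"], counts["is"], counts["fast"]) // 2 2 1
}
```

Counting words sounds trivial, but feed these counts into the statistical methods from the machine learning section and you already have a workable spam filter or document classifier.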
Computer Vision
Computer vision is a field of AI that allows computers to understand visual information. This includes being able to identify and classify information found in images, including facial recognition.
The most widely used library (regardless of language) for computer vision is called OpenCV. If you’re interested in computer vision, you’ll want to master this library. It’s big; check out the Wikipedia page’s list of applications. OpenCV is written mostly in C++, but you can access it from any of the popular languages. To use it from within a Go app:
- GoCV: This is a set of bindings for Go that call into the OpenCV library. It’s mature and polished.
Sentence Transformers in Go
Sentence transformers are related to LLMs, but they work with entire sentences at once, whereas LLMs process text one token at a time (tokens are words or, sometimes, partial words). This is a big AI topic within the Python community. With sentence transformers, you can build intelligent search engines that work alongside your own knowledge base.
The intersection between Go and AI is relatively new, and there isn’t much available when it comes to sentence transformers in this context. If you’re new to AI and want to study sentence transformers, you’ll probably want to start with Python instead. Take time to understand how the sentence transformers library works, how to use it, what it means to tokenize a sentence, and so on. Then, once you’re comfortable with the technology, you can switch back to Go. (In the next couple of years, somebody will probably build some decent sentence transformer libraries in Go; there are some early projects on GitHub you might want to keep an eye on.)
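Even without a Go sentence-transformer library, the operation you’d perform on its output is simple to write yourself: comparing two embedding vectors with cosine similarity. The vectors below are made up for illustration; a real model would produce them from sentences:

```go
package main

import (
	"fmt"
	"math"
)

// cosine measures how similar two embedding vectors are: 1 means the
// same direction, 0 means unrelated. Sentence transformers map whole
// sentences to vectors so that similar meanings score close to 1.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

func main() {
	// Made-up 3-dimensional "sentence embeddings"; real models
	// produce vectors with hundreds of dimensions.
	cat := []float64{0.9, 0.1, 0.2}
	kitten := []float64{0.85, 0.15, 0.25}
	stock := []float64{0.1, 0.9, 0.7}

	fmt.Printf("cat vs kitten: %.2f\n", cosine(cat, kitten))
	fmt.Printf("cat vs stock:  %.2f\n", cosine(cat, stock))
}
```

This is the heart of an intelligent search engine over your own knowledge base: embed every document once, embed each query as it arrives, and return the documents whose vectors score highest against the query.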