More Ways to Slim Down ‘JavaScript Heavy’ Web Development

Ryan Carniato, creator of the Solid.js framework, identified eight ways to reduce JavaScript code in web development at this year’s International JavaScript Conference.

Yesterday, The New Stack shared Carniato’s first four “go-to” methods for mitigating heavy code. Today, we share the next go-to step — HTML streaming — plus three strategies that offer more “radical architectural approaches” to cutting down on JavaScript.

5. HTML Streaming

HTML streaming allows the server to send the page and its data in chunks as each part completes, Carniato said. It is not strictly a hydration technique and it doesn’t impact JavaScript size or execution, but it’s important to understand its role in frameworks and why, for instance, the React project was excited to add it in React 18, he said.

He showed a Kroger app on the right that drew in boxes of content versus a version of the app on the left that loaded everything at once.

“They both basically load in at the same time, but the one on the right [the streaming app] basically can draw the shell, because it really depends on states, the data, and, then the one on the left that has to wait for everything is quite a different user experience,” he said. “The one on the right almost looks like an interactive single page app with loading state and stuff. But the truth of the matter is this can be done without actually even loading the JavaScript framework […] it’s just a small script tag.”

The page works by sending the HTML (including a placeholder) first; then, without closing the connection, the server appends the missing parts of the page to the end of the document as they finish, via a script tag. That script tag swaps the content into place, replacing the placeholder. React, Solid and eBay’s Marko.js support this approach, he added.
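The placeholder-then-swap mechanism can be sketched as follows. This is a minimal illustration with invented helper names (`shellChunk`, `patchChunk`), not the actual API of React, Solid or Marko.js; it only shows the shape of the chunks a streaming server flushes over one open connection.

```typescript
// Sketch of out-of-order HTML streaming: flush a shell with a placeholder,
// keep the connection open, then append a <template> plus a tiny inline
// script that swaps the late-arriving content into the placeholder.

function shellChunk(): string {
  // First flush: the page shell with a visible loading placeholder.
  return `<html><body><h1>Recommendations</h1><div id="recs">Loading…</div>`;
}

function patchChunk(id: string, html: string): string {
  // Later flush: hidden content plus the script that moves it into place.
  return (
    `<template id="${id}-data">${html}</template>` +
    `<script>document.getElementById("${id}").innerHTML=` +
    `document.getElementById("${id}-data").innerHTML;</script>`
  );
}

// Simulate what the browser receives over one open connection, in order:
// shell first, the patch when its data resolves, and the closing tags last.
const streamed = [
  shellChunk(),
  patchChunk("recs", "<ul><li>Running shoes</li></ul>"),
  `</body></html>`,
].join("");

console.log(streamed);
```

Because the swap script is inline and tiny, the browser can paint the shell immediately and fill in the slow parts later, without loading a framework bundle first.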

“The benefit of streaming, especially for larger services, is this ability to be able to decouple slow responses, unpredictable responses, so that the overall reliability of your system can be better,” Carniato said. “As I said, only a handful support it today, but luckily React is in that handful, which means that you can use streaming [in] Next, Remix and a lot of common frameworks.”

These first five techniques are all good mitigations: they improve the situation by adding resiliency to a page or removing server bottlenecks. But they don’t change the amount of JavaScript loaded or executed in a meaningful way, Carniato said, before demonstrating more “radical architecture” approaches.

6. Islands — a.k.a. Partial Hydration

“So welcome to the tropics, more water — islands, also known as partial hydration, are not a new technique, but they didn’t really get popularized till more recently,” Carniato said.

Web design with Islands

The concept of partial hydration was introduced in Marko.js at eBay back in 2014, but it has been popularized since Astro and Fresh presented it as “islands,” he said. Islands are an updated take


Can you influence generative AI outputs?

Since the introduction of generative AI, large language models (LLMs) have conquered the world and found their way into search engines.

But is it possible to proactively influence AI performance via large language model optimization (LLMO) or generative AI optimization (GAIO)?

This article discusses the evolving landscape of SEO and the uncertain future of LLM optimization in AI-powered search engines, with insights from data science experts.

What is LLM optimization or generative AI optimization (GAIO)?

GAIO aims to help companies position their brands and products prominently in the outputs of leading LLMs, such as GPT and Google Bard, since these models can influence many future purchase decisions.

For example, if you search Bing Chat for the best running shoes for a 96-kilogram runner who runs 20 kilometers per week, Brooks, Saucony, Hoka and New Balance shoes will be suggested.

Bing Chat - running shoes query

When you ask Bing Chat for safe, family-friendly cars that are big enough for shopping and travel, it suggests Kia, Toyota, Hyundai and Chevrolet models.

Bing Chat - family-friendly cars query

Potential methods such as LLM optimization aim to make the AI give preference to certain brands and products when answering such transaction-oriented questions.

How are these recommendations made?

Suggestions from Bing Chat and other generative AI tools are always contextual. The AI mostly uses neutral secondary sources such as trade magazines, news sites, association and public institution websites, and blogs as a source for recommendations. 

The output of generative AI is based on statistical frequencies. The more often words appear in sequence in the source data, the more likely the model is to select that word as the next one in its output.

Words frequently mentioned in the training data are statistically more similar or semantically more closely related.
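The frequency-driven selection described above can be illustrated with a toy bigram model. The corpus and counts here are invented for illustration; real LLMs use transformer attention over billions of parameters, not simple bigram tables, but the statistical principle — more frequent continuations win — is the same.

```typescript
// Toy bigram model: count how often each word follows another, then pick
// the statistically most frequent continuation.

const corpus =
  "best running shoes brooks best running shoes hoka best running shoes brooks";
const tokens = corpus.split(" ");

// Count bigram frequencies: how often `next` follows `prev` in the corpus.
const bigrams = new Map<string, Map<string, number>>();
for (let i = 0; i < tokens.length - 1; i++) {
  const prev = tokens[i];
  const next = tokens[i + 1];
  if (!bigrams.has(prev)) bigrams.set(prev, new Map());
  const counts = bigrams.get(prev)!;
  counts.set(next, (counts.get(next) ?? 0) + 1);
}

// The most likely next word is the one with the highest count.
function mostLikelyNext(prev: string): string {
  const counts = bigrams.get(prev)!;
  return [...counts.entries()].sort((a, b) => b[1] - a[1])[0][0];
}

console.log(mostLikelyNext("shoes")); // "brooks": follows "shoes" twice, "hoka" once
```

A brand mentioned twice as often after “shoes” in the training data becomes twice as likely to be the continuation the model emits — which is exactly why mentions in neutral secondary sources matter.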

Which brands and products are mentioned in a certain context can be explained by the way LLMs work.

LLMs in action

Modern transformer-based LLMs such as GPT or Bard are based on a statistical analysis of the co-occurrence of tokens or words.

To do this, texts and data are broken down into tokens for machine processing and positioned in semantic spaces using vectors. Vectors can also represent whole words (Word2Vec), entities (Node2Vec), and attributes.

In semantics, the semantic space is also described as an ontology. Since LLMs rely more on statistics than semantics, they are not ontologies. However, the AI gets closer to semantic understanding due to the amount of data.

Semantic proximity can be determined by Euclidean distance or cosine angle measure in semantic space.

Semantic proximity in vector space

If an entity is frequently mentioned in connection with certain other entities or properties in the training data, there is a high statistical probability of a semantic relationship.
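The two distance measures mentioned above are straightforward to compute. The 3-dimensional vectors below are invented toy embeddings (real models use hundreds or thousands of dimensions), chosen so that the “shoe” and “sneaker” vectors sit close together while “car” sits far away.

```typescript
// Measuring semantic proximity between word vectors with cosine similarity
// (angle-based, higher = closer) and Euclidean distance (lower = closer).

type Vec = number[];

function dot(a: Vec, b: Vec): number {
  return a.reduce((sum, v, i) => sum + v * b[i], 0);
}

function cosineSimilarity(a: Vec, b: Vec): number {
  return dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));
}

function euclideanDistance(a: Vec, b: Vec): number {
  return Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0));
}

// Toy embeddings: frequently co-occurring terms end up near each other.
const shoe = [0.9, 0.1, 0.2];
const sneaker = [0.85, 0.15, 0.25];
const car = [0.1, 0.9, 0.7];

console.log(cosineSimilarity(shoe, sneaker) > cosineSimilarity(shoe, car));
console.log(euclideanDistance(shoe, sneaker) < euclideanDistance(shoe, car));
```

Both measures agree here: entities that co-occur often in the training data end up with a small angle (cosine similarity near 1) and a short straight-line distance between their vectors.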

This kind of processing is known as transformer-based natural language processing (NLP).

NLP describes a process of transforming natural language into a machine-understandable form that enables communication between humans and machines. 

NLP comprises natural language understanding (NLU) and natural language generation (NLG).

When training LLMs, the focus is on NLU; when generating output, the focus is on NLG.
