Adding LLM support to this forum

sdlucid

Not sure if it's true as I don't really use Meta apps, but I just came across some Facebook screenshots where their AI model summarizes discussion threads in FB groups or under popular FB posts. Given that this forum repeatedly gets the same topics and questions discussed over and over, I thought it might be great if such a feature existed here in some form. Moreover, many of these discussion threads are too long for anyone to read through in a reasonable amount of time. And again, I have no idea if this topic has already been discussed before :D
 
You can do it yourself: just plug in the URL of the thread you want summarized.

Lucid Air owners discuss the potential for using Tesla’s North American Charging Standard (NACS) Superchargers. Currently, Tesla’s Magic Dock enables some non-Tesla EVs to charge at Superchargers, though Lucid cars aren’t yet NACS-compatible. Lucid may offer a NACS adapter or direct compatibility by 2025, but no official announcement has been made. Some third-party adapters are available, like TeslaTap and Lectron, though Lucid users await broader compatibility at Tesla stations.

For more details, visit Lucid Owners forum thread.
 
Wow! It really IS something @borski could throw together in about three minutes! I thought this would be a daunting task. Now, I guess the prevailing ChatGPT question is how accurate the summary will be.
 
Yes, that would be the right question ....
 
Very cool. But now I have a follow up question. I'm almost always using the lucid owners forum app. How do I find the URL for any given thread?
 
On an iPhone, you would just touch and hold the thread title until you get the option to copy. Tap Copy, then paste the link into ChatGPT.
 
Not sure ChatGPT API access is free... but maybe @borski is a baller and willing to cough up the tokens to let people summarize threads instead of reading them.

Personally, half the fun of forums has always been searching and learning before jumping into conversations. Facebook groups and Reddit have sort of killed the forum format, but I still think it's a valuable way to share info and aggregate data.

Maybe instead of ChatGPT we can use Ollama: https://ollama.com/
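Something like this would be all it takes. A rough sketch, assuming Ollama is installed and a model has been pulled first with ollama pull llama3 (the model choice and the prompt here are just placeholders):

Bash:
#!/bin/bash
# Rough sketch: summarize whatever text is piped in with a local Ollama model.
# Assumes the model was pulled first with: ollama pull llama3

text=$(cat)  # read the thread text from stdin

# Ollama's local REST API listens on localhost:11434 by default.
jq -n --arg prompt "Summarize this forum thread: $text" \
      '{model: "llama3", prompt: $prompt, stream: false}' \
  | curl -s http://localhost:11434/api/generate -d @- \
  | jq -r '.response'

Run it as, say, ./summarize.sh < thread.txt and everything stays on local hardware, with no tokens billed.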
 
I like the idea. I don't know how to implement it though. This is something the XenForo software team could be working on. I will keep my eye out and provide the suggestion to them.
 
Actually, this is a fantastic idea and I know just the guy. @LucidDropkick this is literally all you.
 
I'm almost always using the lucid owners forum app.
There’s an app?
 
There’s an app?
Lol yeah. I can't believe I knew literally anything about it before you did. I saw a notice about it on my mobile browser a while back and I've been using it since. Pretty useful to just have it on my home screen. But there's no address bar at the top to copy and paste to any AI site.
 

Attachments

  • Screenshot_20241114_163803_One UI Home.webp (54.7 KB)
Bottom left
I don’t think that’s an app, my friend. That is simply a bookmark saved to your home screen. I’m willing to be wrong here, but if there really is an app, please send a link to the App Store for it.
 
Can you literally run to the App Store? I did, yesterday, when this was first posted ...
 
I don’t think that’s an app, my friend. That is simply a bookmark saved to your home screen.
Guess you're correct. Sorry for the confusion
 
I thought it might be great if such a feature existed here in some form.
Not sure how much traffic this forum gets, but depending on how heavily a feature like that got used, the cost could rack up quickly: either the resources needed to cover peak inferencing would have to be left running (accumulating cost the whole time), or someone would have to eat the token cost of a shared hosted option for everyone to use. I say just pull down Llama, run it locally, and point it here. Or run LM Studio, find an LLM there that you can run locally, and point it here. All of that will run reasonably well even without a GPU.
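If someone goes the LM Studio route, its local server exposes an OpenAI-compatible endpoint (on port 1234 by default), so the request looks just like an OpenAI call with the URL swapped. A quick sketch; the model name is a placeholder and no API key is needed locally:

Bash:
#!/bin/bash
# Sketch: an OpenAI-style chat request aimed at LM Studio's local server
# (default http://localhost:1234) instead of api.openai.com.

jq -n --arg text "$(cat)" \
   '{model: "local-model",
     messages: [{role: "user", content: "Summarize this forum thread: \($text)"}]}' \
  | curl -s http://localhost:1234/v1/chat/completions \
      -H "Content-Type: application/json" -d @- \
  | jq -r '.choices[0].message.content'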
 
Thanks for the shoutout, @borski!

Depending on how much data there is to parse though, running an LLM wouldn't really be all that expensive.

Vectorizing the entire DB of text for RAG may be a problem, though. If we're talking gigabytes of text, the risk of hallucination goes up significantly. And RAG definitely has its limitations.

A much more efficient option, in my opinion, would be to integrate Jina. Jina can scrape any publicly accessible page and turn it into pure markdown, which even small 8B models can then read with a very high degree of accuracy. It would mean zero RAG, no vectorizing, just straight one-shot LLM queries. And since it would operate in a just-in-time (JIT) fashion, there is no major storage to consider.
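You can try the reader straight from a terminal. Prefix the page's full URL with r.jina.ai and you get the page back as markdown:

Bash:
# Fetch this very thread as LLM-ready markdown via Jina's reader endpoint.
curl -s "https://r.jina.ai/https://lucidowners.com/threads/adding-an-llm-support-to-this-forum.10547/"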

Here's a really, really quick PoC. @joec, please don't cringe too hard at my coding skills; that's not my forte!

Bash:
#!/bin/bash

API_KEY="Borski would murder me if he found out I hardcoded an API key"  # Your OpenAI API key

# Jina's reader: prefix the full thread URL with r.jina.ai to get it as markdown.
URL="https://r.jina.ai/https://lucidowners.com/threads/adding-an-llm-support-to-this-forum.10547/#post-233593"

# Get the markdown directly.
markdown=$(curl -s "$URL")

# Construct the JSON payload. jq's --arg handles all the JSON escaping,
# so no manual sed mangling is needed.
json_payload=$(jq -n \
                  --arg markdown "$markdown" \
                  '{model: "gpt-4o",
                    messages: [
                      {role: "system", content: "You are a helpful assistant."},
                      {role: "user", content: "Summarize the following: \($markdown)"}
                    ]}')

# Ask OpenAI's chat completions endpoint for the summary.
response=$(curl -s https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $API_KEY" \
  -d "$json_payload" --fail)

# Pull the summary text out of the JSON response.
summary=$(echo "$response" | jq -r '.choices[0].message.content')

echo "Summary:"
echo "$summary"

So what that would give us is this:

[animated GIF: the script printing out a summary of this thread]


Obviously there's so much more here that can be done, especially with prompting, passing user parameters, etc. There's also the question of dealing with pagination.
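On pagination: XenForo threads paginate with /page-2, /page-3, and so on, so one rough approach (assuming an out-of-range page comes back as an HTTP error) is to walk the pages through Jina and concatenate the markdown before the summarize call:

Bash:
#!/bin/bash
# Rough pagination sketch: walk a XenForo thread page by page through Jina,
# stopping at the first page that fails to fetch, and collect the markdown.
BASE="https://r.jina.ai/https://lucidowners.com/threads/adding-an-llm-support-to-this-forum.10547"

all_markdown=""
for page in $(seq 1 50); do            # hard cap so a redirect can't loop forever
  url="$BASE"
  [ "$page" -gt 1 ] && url="$BASE/page-$page"
  chunk=$(curl -sf "$url") || break    # -f: treat HTTP errors as failure
  all_markdown+="$chunk"$'\n'
done

# $all_markdown then feeds the same summarize call as the PoC above.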

Jina uses a client-side API key that gets generated the moment you visit their URL (you don't need an account), so there's nothing to configure on the forum's side.
 
The forum has something called a Progressive Web App (PWA).

Open the menu in your mobile browser and tap Install.
 

Attachments

  • IMG_0763.webp (57.7 KB)
Guess you're correct. Sorry for the confusion

To your credit, the definition of "app" has become wildly diluted in the past few years. Most of us who were around before smartphones, and right at the beginning of the smartphone boom, remember "apps" as being native code, native UI toolkits, native everything running directly on the device. These days, "app" has blown up to just mean "a highly focused web browser". Progressive Web Apps are essentially just Chrome or Safari, without the buttons or the address bar (and a few minor things). They're more akin to bookmarks on steroids.
 