You are viewing a single comment's thread from:

RE: Chasing shadows: Is AI text detection a critical need or a fool's errand?

in Popular STEM · 28 days ago

I haven't tried LM Studio yet - do you have the option of using your GPU for processing?

The PrivateGPT that I set up was much quicker with the GPU, even on my laptop. It was a pain to set up and took me pretty much a full day because the GPU needed CUDA in my WSL environment. It was so tricky that I deleted it all and nearly gave up, but a second attempt went more smoothly.
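If anyone hits the same CUDA-in-WSL wall, a quick sanity check before reinstalling anything heavy is to confirm the GPU is actually visible from inside the WSL distro (this is a generic check, not specific to PrivateGPT):

```shell
# Inside WSL: the Windows NVIDIA driver is passed through automatically
# on recent Windows builds - no driver install inside WSL itself.
nvidia-smi

# If the CUDA toolkit was installed inside WSL, confirm it's on the PATH.
nvcc --version
```

If `nvidia-smi` fails here, updating the Windows-side NVIDIA driver is usually the first thing to try; the toolkit inside WSL won't help until the driver is visible.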

A very quick local GPT option would be Ollama - there are a few uncensored models available and it takes only three or four commands to install and run.
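For reference, a minimal Ollama setup on Linux/WSL looks roughly like this (model names are examples from the Ollama library - check what's currently listed there):

```shell
# Install Ollama via the official install script
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model, then chat with it locally
ollama pull llama3
ollama run llama3
```

That really is the whole setup: the install script registers the background service, and `ollama run` drops you straight into an interactive prompt.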

> We're missing out on a massive opportunity that we should have been perfectly positioned for.

That's perhaps reflective of the majority-community perspective that AI-generated content is bad. Full stop. There are only a handful of us now who want to do something different - the herd appears happy with its diary games and engagement challenges and chooses not to look (or think) beyond that. Maybe aspiring to become a curator or representative, or to run a community. Nothing beyond "the template".


> do you have the option of using your GPU for processing?

I don't remember, and I've already uninstalled it. If I get motivated, maybe I'll reinstall it and check. My PCs are so old, I think I'll need to modernize, though.

> That's perhaps reflective of the majority-community perspective that AI generated content is bad. Full stop.

Yeah, true. And, to be fair, AI right now is very easy to misuse here. It's a conundrum.
