Probably want to stop using Booklore...
from jasonweiser@sh.itjust.works to selfhosted@lemmy.world on 13 Mar 06:30
https://sh.itjust.works/post/56734023
Just a PSA.
Sorry to link to Reddit, but not only is the dev sloppily using Claude to do something like 20k-line PRs, but they are completely crashing out, banning people from the Discord (actually I think they wiped everything from Discord now), and accusing people forking their code of theft.
It’s a bummer because the app was pretty good… thankfully Calibre-web and Kavita still exist.
Can't check now, but if there aren't forks named like BookTale and BookStory, I'll riot.
Jokes aside, if the license he used allows forking, dude's tripping, and could even get sued depending on the country for false accusation of crime.
And ah, Discord, great for nuking inconvenient chats. Imagine if it had happened over at a public forum so people's reactions could be backed up.
And dunno where I’d draw the line, but 20k lines imo is a bit past reasonable. How would anyone audit that many in a timely manner? With the “dev” doing that daily, it’d be hard to even pretend to.
The Trekkie in me wants BookData.
(edit) This made me remember The Measure Of A Man and now I’m fucking depressed. They had such high hopes for the future.
The extra bit about Lore being the one who could make shit up and say what folks wanted to hear while Data was based on facts and logic isn’t lost on me either
“You want to seek out new life and new civilizations? Well THERE. IT. SITS!”
Truly an actor of all time. Sometimes I think Oscars and Emmys and all that shit should be able to be granted retroactively (hellooooo, “you broke your little ships” scene).
I self-host audiobookshelf, and it’s working pretty well for me. It doesn’t have tons of features, and the android app is a bit janky, but it does what I need and I’m happy with it.
I use it daily and I love it :)
Use it every day and it’s all I need.
Reading this whole thread I’m now glad I have Audiobookshelf already. I already use it every day for audiobooks and podcasts, but hadn’t considered it for ebooks.
Just transferred my ebook collection from Booklore to ABS. I’ll miss the Kobo Sync which seems to work in Booklore, but I certainly won’t miss the AI and vibe coding!
I literally just got this all set up and was about to hook up my wife’s kobo to it, good timing for this to come out so I don’t waste any more of our time with this slop. What a shitshow.
I just spun up Komga instead last night (I was going to set up CWA but I’ve heard sketchy things about their lead dev that don’t leave me optimistic). Very easy to get up and running, pretty basic but it seems to work well and does exactly what it needs to do. I was a bit hesitant since it seemed geared toward comics, but it’s handling regular ebooks just fine.
Wait, I use CWA… What do I need to be outraged about this time?
I don’t have the full details, but I saw some mentions in that Booklore reddit thread about CWA’s dev ignoring major issues in favor of new features and such, something like that. I admittedly didn’t really do much research into that nor the tool itself, but Komga’s Kobo support seems better, so I just went with it.
That’s not a reason to consider CWA unsafe
Hmmm… Calibre web’s kobo integration is good enough, but Komga seems to be able to sync progress as well?
I might have to try Komga after all.
what happened with CWA? I was thinking of using it
Don’t know that software, but that was fun to read lol
Quick! Someone add it to Open Slopware!
Man this list is depressing. Good to have handy though. Sad to see SearXNG and a few others on here.
Seriously… kitty, rawtherapee, keepassxc, python, the freaking linux kernel!
Did you read about the kernel? They are experimenting with using it for reviews. They have some prompts for an LLM to catch issues before they get to maintainers, which frees up time. Don’t see an issue if that is all it is.
It might be, but for some people that might, understandably, be already bad enough, a line in the sand if you will.
I’m reminded of this statement about LLMs and the kind of people who use them in the first place. It’s an early indicator that the quality (and sovereignty) of the software is going to go into decline.
Searxng? Fuck, guess I’m just not pulling a new container.
There is this that popped up the other day, but I haven’t looked into it at all to see if it’s vibecoded or not: github.com/fccview/degoog
Thanks, will dig into this!
It’s not, the second I cloned it and gave codex access it found a whole whack of privacy issues. This was 100% human coded
degoog Dev here, definitely not vibecoded. Would you be able to tell me about this whole whack of privacy issues? I thought I had everything covered, but if you found something concerning it’d be nice to know before I get it out of beta :)
Additional Improvements:
Made some other changes for my specific deployment. Very happy with your work so far. Thanks so much
Thanks, I’ll look into all of these individually ♥️ I’ll say some of them are conscious compromises for the sake of an open, scalable system where third-party extensions can truly edit anything (intentionally). Everything around auth/secure cookies is also fairly lax because the auth is just a protection for the settings (which literally stops the settings from being served to the client). The moment I decide to add a more structured auth system (maybe users), I’ll look into proper secure cookie handling.
This is an awesome report, thank you so much for sharing it!!!
Hey sorry for the delay, dealing with a lot right now, but I didn’t forget about it.
1. Fixed this: the API key is now only forwarded if the destination hostname matches the plugin’s stored URL.
2. As I was saying, the allowlist is opt-in by design (null = allow all), and plugins legitimately need to make arbitrary outbound requests. Enforcing it globally would break the plugin system.
3. Fixed this; it was quite simple.
4. I’ve added an env var (DEGOOG_DISTRUST_PROXY); if set to true, all users share the same rate limit regardless of their IPs. I left it opt-in, as most users currently run it privately behind their own in-house reverse proxies. This will be handy for a public instance, for example.
5. The extension settings modal now correctly sends x-settings-token on save.
6. As I said, auth is intentionally lax until a more structured auth system is added, which may be a few weeks after stable is live. After all, there’s no real auth, and the password-protected settings and private view should be secure enough as is.
btw all this is not live yet, it’ll be sent live with the next release ♥
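For anyone curious, fixes 1 and 4 above boil down to two small decisions: whether to attach a plugin’s API key to an outbound request, and which bucket a request’s rate limit goes into. A minimal sketch of that logic (DEGOOG_DISTRUST_PROXY is from the dev’s comment; the function names here are hypothetical, not degoog’s actual code):

```python
from urllib.parse import urlparse

def should_forward_api_key(request_url: str, plugin_stored_url: str) -> bool:
    # Only attach the plugin's API key when the destination hostname matches
    # the hostname of the URL stored in the plugin's settings (fix 1).
    return urlparse(request_url).hostname == urlparse(plugin_stored_url).hostname

def rate_limit_key(client_ip: str, distrust_proxy: bool) -> str:
    # With DEGOOG_DISTRUST_PROXY=true, client IPs (e.g. from X-Forwarded-For)
    # aren't trusted, so every user shares one rate-limit bucket (fix 4);
    # otherwise each IP gets its own bucket.
    return "shared" if distrust_proxy else f"ip:{client_ip}"
```

The hostname check stops a malicious page from coaxing the proxy into leaking a stored key to an attacker-controlled host, while the shared bucket avoids trusting spoofable forwarded-IP headers on public instances.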
Booklore is listed as an alternative to Calibre 😭
Damn it!!! 😵
Wait Calibre is there??? Oh my god no.
Fortunately this exists
@lambalicious @jasonweiser Not sure seafile should be listed as an alternative. We couldn't include it on Debian due to copyright sketchiness/plagiarism...
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=928975
Geez… problems never end, do they.
I’m barely active in Codeberg. Unless someone beats me by, say, end-of-month, I might file an issue about it; that said, I’d like to be able to offer at least one (1) functional alternative rather than simply +1’ing the complaints that this or that is Never Good Enough.
That list is depressing
Especially since it is actually listed as an alternative to calibre there
It seems like the criteria for making it on there is fairly lax. Nextcloud makes a list by simply having an AI assistant as an optional (user-facing) feature, while none of the actual code appears to be AI-generated.
It’d be easier to make a list of all software that doesn’t use any AI at this point
Even if the “has an optional AI assistant” was not a thing, the repo includes an AGENTS.md file, which is also listed in the criteria, and more than qualifies it as slopware.
“No, anti AI people aren’t technophobic”
Meanwhile anti AI people:
Holy shit, I almost wish I didn’t read through that list… So sad to see so many projects go down this path…
And every time the use of LLMs for open source development comes up we get the same tired spiel from people about how it’s just a tool, and implications that anyone who doesn’t embrace it with joy in their heart is just a Luddite.
It seems to me that it’s less a tool and more like intentionally infecting your project with cancer. Sure, it shows all the signs of rapid growth, but metastasis isn’t sustainable or desirable. Plus I have yet to encounter a strong advocate for LLMs who isn’t a cunt.
I find an LLM is a great way to shortcut the googling it’d take for me to parse random error message #506 when I’m learning a new language, but that’s about it. I’m also in no way writing software meant for mass consumption.
Ergo it’s a tool, a search engine replacement, that we wouldn’t need if search hadn’t gone to shit due to neglect and active internal sabotage.
Oh 100%.
I’ll argue that it is a tool, and object to automatic zealous hostility towards anyone using it, but that doesn’t mean criticisms of how that tool is being used aren’t valid. It seems like that is what people are focusing on here, and they definitely aren’t Luddites for doing so.
I think I can provide you a great equivalent. Firearms, they have utility, but there are people who make them a lifestyle choice, and there are people who make them their whole personality. There are also a lot of people just desperate for an excuse to use one. I grew up with a couple of farmers in the extended family, I would never argue guns should be entirely banned, but I am so glad I live somewhere with sane laws around gun ownership. It would be so nice if we had similar consideration around regulating LLMs.
The danger to open source as I see it is that LLMs degrade the quality and ability of developers while increasing their throughput, and I have never once heard someone complain that open source lacks quantity, but I hear a lot of people complaining about the quality.
I will complain about quantity, many areas where open source projects are competing with closed source commercial products they have not achieved feature parity or a comparable level of polish, quantity matters. So does, as someone else touched on, quality of life improvements to the process of writing code like ease of acquiring and synthesizing information. That doesn’t mean it’s necessarily a worthwhile tradeoff, but how much is really being sacrificed depends on what exactly is being done with a LLM. To me one part of what’s described here that’s clearly going too far is using it to automate communication with other people contributing to the project, there’s no way that is worth it.
As for the gun thing, I will support entirely banning LLM powered weapons intended to kill people, that’s an easy choice.
I still don’t think quantity is lacking, and when quality is there it’s amazing how often Open Source becomes a defacto standard. How many video tools are just a shim over FFMPEG for example?
Yet again the problem I see is that LLMs are a seductive form of software cancer, it starts as a little help and before you know it we have booklore like projects. If open source can’t be better it will be subsumed in slop.
Not disagreeing about LLMs as a weapon. In a functional society the person who pulls the trigger on any weapon is responsible for the consequences of that action. I wonder how eager the CEOs of these “AI” companies would be to weaponise their creations if they were held personally accountable for every injury caused by their product. By a jury. Preferably with explicit laws stating they could not indemnify or gain immunity.
One example of a place where quantity is lacking is web browsers. Another might be mobile operating systems. I am glad projects like Firefox and GrapheneOS exist, but it’s obvious that the volume of work needed to achieve broad compatibility and competitiveness for these types of software is a limiting factor. As for the idea that any LLM use is a slippery slope, the way to avoid the slippery slope fallacy would be to have compelling evidence or rationale that any use really does lead naturally to problematic use; without that the argument could apply to basically any programming thing that gets to be associated with things done badly (ie. Java), but I think it isn’t usually the case that a popular tool has genuinely no good or safe ways to use it and I don’t think that’s true for AI.
How many browsers would you like me to list, yes a lot of them are spins on some of the big incumbents, but there is a much wider variety than you might credit. Rendering engines on the other hand, yeah there’s not much variety there.
Mobile operating systems are something of a special case I’m afraid; the telcos and incumbents have got way too heavy a thumb on the scale, and if any newcomer looks like breaking the duopoly it will be treated as an existential threat. It will be associated with paedophilic terrorists faster than you can blink.
Both incidentally categories where I will never be happy with slopcode. But hey if anyone wants to use a slop-coded browser I just heavily suggest you never enter any passwords or personal information while using it.
We are actively building a history of cases where LLM usage correlates heavily with that slope you mentioned, but hey that’s OK, we aren’t allowed to call things out before they happen, judgement may only be passed once the damage is done right?
Out of curiosity, we know that LLM usage increases cognitive deficit and in some cases leads to psychosis. How many fatalities would you say is an acceptable number before governments act? How degraded do we let our societies get before we rein it in?
At some point the bubble is going to burst and we will see a number of countries bankrupted in the name of “AI” I’m really curious to see if we learn our lessons at that point. Should be interesting.
The point here isn’t necessarily that any particular use of LLMs is a good tradeoff (I can accept that many will not be especially when security and correct operation is very important), just that quantity clearly matters, to refute the point you were making earlier that it doesn’t.
I think it’s a mistake to consider all LLM usage as one thing, and that thing as some kind of sin to be denounced as a whole rather than in part, and not considered beyond thinking of ways to get rid of it (which is effectively impossible). There were people who had this attitude towards for example electricity, which is actually very dangerous when misused and caused lots of fires and electrocutions, but the way those problems eventually got mitigated was by working out more sensible ways to use it rather than returning to an off-grid world.
You’re looking at this in a fundamentally different way; you seem to think it’s like electricity or indoor plumbing, where it’s primarily a benefit and an enabler of further growth in society.
I see it like asbestos, or to borrow another poster’s example, radium. A technology that has super narrow ETHICAL applications, but since we have elected to make it the only economic force driving large swathes of the world’s markets, we are in the jam-it-into-everything-and-see-how-it-works-out phase. Humanity keeps making this one fundamental mistake, and because we haven’t completely collapsed society and killed ourselves en masse yet, we keep doing it thinking “this time it will turn out differently”.
I am trying to convey that this is a poison whose LD50 is microscopic, why do we as a society all have to experiment with dosing ourselves to find out how much we can take before it corrodes us to death?
It’s already taking a bite out of the computing landscape, it’s damaging the environment, it’s increasing the wealth disparity, it’s causing actual fatalities, and it’s destroying the ability of people at large to think and retain information. Software development is probably one of the strongest cases for LLM usage, so please tell me: how many untrustworthy browsers do we need to offset the above-mentioned costs?
If we had focused a similar level of effort, and money, on transitioning away from fossil-fuel-based energy grids as we have on this nonsense, the world would be in a better place, but that doesn’t allow for the malignant growth of wealth to the 0.01%, so it could never happen. Please make me understand why this is a good thing?
I think you’re in a way conflating two problems. One is policy and private investment in AI and the other is personal LLM use.
The first is stupid irrespective of the underlying economic system. They’re literally using environmental resources that are scarce and our taxes to fund these unnecessary data centers.
The second is something different. I think of it as being able to hire someone to do something for you. You could hire someone to do your homework for you which would be really stupid because it’s essential you do that. Most people didn’t do it not because they had restraint but because it cost too much. With LLMs you just reduced the cost.
I honestly think if the public subsidization of LLMs stopped and policy made them actually pay taxes and environmental regulation fines or for mitigation measures, we would see the actual cost and most frivolous use will stop.
It’ll probably be limited to institutional use and helping doctors summarize notes etc.
The other aspect is probably the correct safeguards for the tool. You don’t need to ban calculators forever. You just need to ban them at stages of education where students need to practice the underlying operations to learn, i.e. probably up to the end of high school.
I think that the problem, in both cases, is culture.
It’s not that either of those are bad, or bad for people; it’s bad for people of this culture or people of this society. It’s how the two intersect that is the problem.
It could be a tool that lifts up the worker or creative, but instead it’s a tool to devalue the creative and extract power and wealth.
It highlights that people with power get a different set of rules and laws than the rest of us, and they’re using that to further entrench and enrich themselves.
And it’s so noisy. We are already losing bug bounties, it’s swamping open source projects in poor-quality or even counterproductive “work” on GitHub to get recognition, it’s drowning out the work of creatives, it’s invading so many aspects of life (education, communication, research, public policy), and it’s fundamentally a bad tool for so many of those areas.
I recently applied for a job and got some advice from a friend who works HR in a different industry. His advice, see if you can find out which LLM they use and run your application through it. A lot of positions are getting huge numbers of applicants so they are using LLMs to generate the short list for interview, you could have the absolute perfect application but because the LLM doesn’t like the way you wrote it you are thrown out of the pool without a human being ever seeing you. It’s so insidious, by being “helpful” it reinforces its necessity.
I think it kinda depends on the context. If someone is just making a tool for themselves and they slap on MIT or GPL3 just because who cares someone else can have it, then sure. Who cares if it’s trash if the stakes are so low that they’re scraping the ground and the user base is expected to be single digits.
But when you care about the reputation of your project, or if your project requires people trust it, then yeah for sure it’s not appropriate to vibe/slop it.
I have ethical concerns about the realities of how this tech is used, mainly in what it’s doing to the economic and power dynamics in society. But I don’t have a problem with the tech itself. That said, I have to admit that it may not be realistic to separate the tech from its inevitable impact. Now I have become death, the destroyer of worlds, and all that.
How do people gain the ability to make these major projects if not for cutting their teeth on the small ones though. We cut the apprentice and journeyman stages of mastering an art out, replace it with slop, and then ten years from now we wonder why kids these days are so incapable of actually creating anything.
I have talked to kids who have told me that the assignments they got at school were so trivial they just ran them through ChatGPT rather than waste their time. When I pointed out that the reason the assignments were “trivial” was to give them the skills and confidence to do the big projects when the time came I got, at best, blank looks.
I said it somewhere else, if you are using an LLM to generate unit tests I find it hard to be terribly mad at that. If it’s scaffolding documentation, meh whatever. If it’s generating the main body of your project, I have concerns. Plus I circle back to how can you open source code that may have been stolen from a copyrighted work?
I did a better job explaining my position in another comment, the problem is one of culture. We live in a culture that pressures people to use AI in this bad way, and pressures the creators of AI to court bad people as customers, and throw away their ethics. If we weren’t in a rat race, I feel like a lot of the problems would go away.
But we live in the culture that we live in, and at some point you simply cannot practically view the technology in isolation.
The problems are human nature, capitalism and greed. Doesn’t mean we have to give in, and frankly all the appeasers out there that keep saying “You have to use it or you will be left behind.” are effectively the drug pusher in the locker room telling the insecure young man “Oh yeah everyone else is juicing, you don’t do it you won’t be able to compete.”
Nobody believes the drug dealers are handing out drugs because they are humanitarians, they have a financial interest in destroying that kids life while he tries to justify it to himself.
We know LLMs are harmful on SO many different levels, but the US economy would literally collapse if people acknowledged that and stopped supporting them. So we race headlong towards societal collapse to keep the plates spinning. Sam Altman, Jensen Huang, Elon Musk, and so many others should all be tried for genocide and crimes against humanity once the collapse occurs. The sooner our societies start stringing these monsters up rather than celebrating them the more hope we have as a species.
I agree with everything you said except that I think too much nurture is attributed to nature. I don’t think it’s human nature, i think this is the nature of our culture. To say it is human nature is, imo, unnecessarily fatalistic.
I’d love to be wrong, but I feel like we are wired in certain ways by the evolutionary process we are the product of. I think the nurture comes in to play with regards to overcoming some of those baser instincts and drives. Anyone who has raised boys can tell you that for most boys they go through phases of being overly aggressive and or violent, that can often be redirected into better ways of getting that out. Can’t speak for girls or people on the intersection due to lack of first hand experience and want to reiterate that I am fully aware that my anecdotes are not universal and everyone falls into a range of behaviours. I feel like what we lack is an elder species we can look up to and emulate, so we are going to need to figure it out for ourselves. I like to think we have the ability, here’s hoping.
What is your argument that that phase of boyhood is nature rather than nurture?
Kids that age are typically emulating their older peers and things they’ve seen at school, in media, at home, in public, etc. If anything, I think the behaviour difference we observe between adolescent boys and girls suggests that kids absorb gender roles very early. Even from before they can walk, the typical toy selection differs greatly; girls get toys that teach them about working with people and caring, while boys get toys that teach them about manual labour(?!?!). Even if you don’t do that with your children, at school and daycare they’re surrounded by kids who are raised like that.
When my son was a preschooler, he loved to wear dresses, but as he approached school age he would wear them less and less, and completely stopped since he started school. I don’t think he grew out of it and we didn’t tell him to stop, but he learned that lesson from his peers.
All the abilities that set humans apart from other animals are social in nature, humans evolved to help each other (at least in small groups)
That’s going to be a hard one to provide definitive answers on. Anecdotally though I saw it with my younger brothers, I saw it with my kids, I have seen it with my friends kids. I am convinced enough, but I will be swayed if someone with the right research and peer reviewed science can make a compelling case. If I had all that I would be presenting my thesis as a best selling book on behavioural psychology, finding a university that would offer me tenure and relaxing into a cushy life of academic celebrity, not posting on Lemmy.
what about that convinced you it’s nature?
All those boys were raised in a similar culture with similar influences regarding how boys should behave. You don’t have a control group.
It’s a powerful tool that people are using without restraint. I think this to be expected in the first few years after any new powerful tool is found. Humans will find a way to mess it up.
See radium cosmetics and ideas to dig the Panama canal using hydrogen bombs. Social media is probably as much or even more dangerous than LLMs.
But they aren’t distinct things, they are both heads of the same capitalism hydra. How much of the training data for these LLMs has been harvested directly from Social Media? I sure as shit don’t know and I would argue nor do many other people.
Radium is probably a good analogy actually. Thank you. It’s toxic in almost every application we can imagine, it’s got a legacy that extends out to the current day, it formed a massive economic block, and it turns out it should only ever have been used under the strictest controls. We should never have had “entrepreneurs” being the driving force behind it.
It should have ALWAYS been a controlled substance that required people who understood and respected how fucking dangerous it is. Instead we are intent on jamming LLMs into every aspect of life regardless of how badly we suspect and/or know it will fuck everything up.
Unfortunately I don’t think caution is a virtue that is rewarded in most circumstances for most people. New tools need to be extensively and rigorously tested before being used.
I don’t even think it’s an individualism/capitalism thing, unfortunately. I’ve been in cultures/societies that are neither, and they still use these tools to further their goals. It’s just power at the end of the day.
It’s like the nuclear bomb. It doesn’t really matter what the underlying economic system of US or USSR were, they still used it to further their goals.
I think the insidiousness is in the power of the tool. For most people it’s just too powerful to not use. I can be an excellent photographer or artist and not make a dime if I don’t engage in social media.
For me that’s the sad thing. Self-hosted small models have been extremely useful to me to perform selective tasks that completely changed how things work. It’s allowed me to manage my research and information processing so much better. But I also know most people don’t put any limiters on it and use it for anything and everything.
I have played around with a bunch of tools at a self-hosted level. The big thing I found that puts inherent brakes on the process is the technical capability to actually use them. When I played around with ESRGAN to upscale images I was limited in application by time and equipment; I achieved better results than I could have on my own, but markedly worse results than if I’d had the technical ability and equipment to just reshoot the images at a better resolution.
I tried some photogrammetry, similar outcomes. I could have done better by being better with Blender. NERFs as well.
What we have is people yelling “Monorail! Monorail!” And using free credits or buying them.
The industry is already losing obscene amounts of money and the actual use cost is still entirely obscured from the general public. Once enough of the world is hooked on using LLMs for everything we are going to see the true costs emerge, then it will be another iteration of the haves and the have nots, society as a whole cannot afford to make LLM usage profitable, where does that get us?
Damn 99% of the time someone says not to use an open source product it’s because of some obscure drama unrelated to the actual program.
But in this case the dev appears to not just be using AI code (not great but debatable) but using mostly AI code and using AI to reply to bug reports. Not something the average person wants to be running in a live environment.
I haven’t used Booklore but the excitement around it was nudging me there. I think I’ll stick with CWAs slower rollout.
Thanks, that might explain the jank I got when spinning it up yesterday…I’ll be back on calibre web or trying another option over the weekend.
Tbh, at the moment the maintainer seems to have gotten the message, or at least tries to make it seem so. I would give him the benefit of the doubt at this stage, at least for a while.
so if someone had a meltdown and started slapping people, you’re willing to give them a pass?
I mean they seem like they’re sorry. /s
dude isn’t regretful of his actions. he regrets the reactions from the community.
being a FOSS dev is like being a merchant, trust is the only commodity you should be dealing in. if you, or your code, can’t be trusted there’s nothing for the community to rally around.
Wow, I was thinking about switching from calibre-web soon too… Thanks for the headsup!
I wonder if it’s just an out-of-line openclaw deleting the discord to silence the humans that don’t like its code.
Too fucking bad, pussy.
Hey, pussies are great, don’t compare them to that idiot.
Fair point.
Booklore is actually good though.
Much more usable than those others
Seems the guy has calmed down too.
when people tell you who they are, listen.
Classic. Another one bites the dust…
Good thing I decided against switching to it, even though my main reason is that my weird book organisation scheme isn’t currently feasible with anything but Calibre or manual organisation, as far as I know.