text: string (lengths 0–23.7k)
label: string (4 classes)
dataType: string (2 classes)
communityName: string (4 classes)
datetime: string (95 values)
That's fair, but do you think people will actually read it?
r/technology
comment
r/technology
2024-16-06
This? No. But the more noise there is about this, the more the idea will seep into general knowledge.
r/technology
comment
r/technology
2024-16-06
So I know the original paper by Frankfurt, which is indeed a classic, and one difference seems to be that the human bullshitter is aware of their own indifference to truth; they know what they are doing. From that point of view, ChatGPT doesn't even qualify as bullshit because there's no intentional attitude present in the system. As other people have pointed out, it's just some algorithms at work calculating probabilities. It isn't 'responding' to or 'answering' anything. User input causes the algorithm to run, that's all; what we naturally read as text is not text from the algorithm's point of view, because the algorithm doesn't have a point of view at all. We can't help thinking about AI in anthropomorphic terms, but that's actually very misleading with respect to what's really happening on the computational side.
r/technology
comment
r/technology
2024-16-06
Could you explain?
r/technology
comment
r/technology
2024-16-06
That's hilarious. I always loved Penn and Teller's Bullshit. Great show, and that is the exact definition they use. I never thought of applying it to AI. It's perfect.
r/technology
comment
r/technology
2024-16-06
Exactly, modern AI isn't functionally different from a random name generator. Yeah, they're more complex, but ultimately they are "learning" patterns and then spitting out things that in theory should match those patterns. Yes, the patterns are vastly more complicated than how to construct a name according to X set of guidelines, but it's still functionally doing the same thing.
r/technology
comment
r/technology
2024-16-06
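The name-generator comparison in the comment above can be made concrete with a toy sketch (illustrative only, not from the thread): a minimal Markov-chain name generator that "learns" which letter tends to follow which, then spits out new strings matching those patterns.

```python
import random

def train(names):
    """Learn a bigram transition table: which letter tends to follow which."""
    table = {}
    for name in names:
        padded = "^" + name.lower() + "$"   # ^ marks start, $ marks end
        for a, b in zip(padded, padded[1:]):
            table.setdefault(a, []).append(b)
    return table

def generate(table, rng):
    """Spit out a new name that matches the learned letter patterns."""
    out, ch = [], "^"
    while True:
        ch = rng.choice(table[ch])
        if ch == "$":
            return "".join(out).capitalize()
        out.append(ch)

rng = random.Random(42)
table = train(["anna", "elena", "lena", "alan", "nala"])
print([generate(table, rng) for _ in range(3)])
```

An LLM's transition table is vastly larger and conditioned on far more context, but the generate-what-matches-the-learned-patterns loop is the same basic idea.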
It can be search engine informed though. Essentially, the answers an LLM gives you are based on the information it has access to. The main model functions in many ways more or less as you say, but actual AI products add context to this. Some truly use normal (or normal-ish) search, such as Copilot. Others use very specific context inputs for a specific task, such as GitHub. And then you can build your own products, using some form of retrieval augmented generation to create context for what you are looking for. At that point, you are actually using search to first find your information, and then turning that information into whatever output format you want. Essentially, if you give the model more accurate data (and less broad data) to work with, you get much more accurate results.
r/technology
comment
r/technology
2024-16-06
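The retrieval-augmented generation flow described above can be sketched in a few lines (a hypothetical illustration; `call_llm` is a placeholder for whatever model API you actually use, and real systems use vector search rather than keyword overlap):

```python
def retrieve(query, documents, k=2):
    """Naive keyword-overlap retrieval; stand-in for real vector search."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def answer(query, documents, call_llm):
    """RAG in miniature: search first, then generate from what was found."""
    context = "\n".join(retrieve(query, documents))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)

docs = [
    "The Synnovis attack disrupted lab tests for London hospitals.",
    "Nail guns made roofers faster, not obsolete.",
    "Magnetic-field navigation works where GPS is occluded.",
]
top = retrieve("which attack hit hospital lab tests", docs)
print(top[0])
```

The point from the comment falls out directly: the narrower and more accurate the retrieved context, the less room the model has to generate off-topic output.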
Logic has almost nothing to do with it unless you use the word really loosely. It forms complete sentences, and often not even coherent ones.
r/technology
comment
r/technology
2024-16-06
I've heard these arguments many times. I trust well-known scientists more than a random dude who claims to have two degrees. >That said, nothing ChatGPT does has anything to do with knowledge. ChatGPT has no understanding of anything, it has no concept of anything. How do you prove you have understanding of anything, or concepts in your head? >It's just calculating a string of letters that is what would be statistically likely to answer the input (e.g. question). "Just calculating" is a strong phrase. If you have the degrees you claim, you should know that current neural networks are pretty far from the perceptron examples of the 1970s. Inside a neural network the parameters form their own related subnetworks which, for all we know, may be similar to the way humans store information in our brains. >When it strings the letters c, a & r together, it has absolutely no understanding of what a car actually is. If you ask it what a car is it can string together some letters that will likely be something a human can read and use to understand what a car is, but the LLM itself has no mental representation of a car. It has no understanding of anything, it's mindless. I wonder how you can say that with such certainty when the problem of interpretability is not even close to being solved, so we basically don't know how anything is represented inside an LLM. Again, when you say a human "can understand", tell me what that means. Give a definition, and then show how it is provably different from what an LLM does inside. >Like what the fuck is your dumb ass comment trying to say? Trust me you don't need a Philosophy degree or a Nobel prize to understand that the only thing an LLM "knows" (in a very loose interpretation of the word) is how to calculate strings of letters that are statistically likely to match what a human might reply to a prompt. There is no reason to believe that humans, when replying to simple questions, do anything different at a base level.
My comment may be dumb, but at least I'm not an overconfident beginner who thinks they know more than the top researchers in the field.
r/technology
comment
r/technology
2024-16-06
And we sometimes don't, or we use it in a funny context, which is why it gets things wrong. It's only as good as its training data.
r/technology
comment
r/technology
2024-16-06
I'm sorry, but their argument is a little too simplistic. You could take their whole argument and level it at the whole practice of inferential statistics. Is the practice of inferential statistics bullshit because there's no a priori reasoning? This is my view: it's not that ChatGPT is qualitatively different from anything in inferential statistics. It's how the outputs are framed and consumed. In inferential statistics, outputs are usually framed and consumed as estimates—e.g., margin of error and statistical significance. In LLMs, every predicted output is also an estimate in two ways: in the first way, **it represents** the best estimate of relatedness to the prompt, and in the second way, **it reflects** the probability distribution over possible responses. However, we don't perceive it that way because: 1. The output isn't framed with any of the indications of estimation. 2. The output itself is language, which can imply itself to be something it's not. For example, "the answer is not X" is a series of guesses—each letter was a best estimate. But the output-as-language implies a certainty it never possessed, and implies many other things too, like a process of logical reasoning. Imagine if every LLM response came with two overall estimates: relatedness to the prompt, and relatedness to correctness. The way we perceive and consume LLM outputs would be totally different.
r/technology
comment
r/technology
2024-16-06
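The "every output is an estimate" point above has a direct mechanical reading (a toy illustration with made-up logits, not from the thread): a model's raw scores are turned into a probability distribution over possible next tokens via softmax, so even the top choice is only a best guess.

```python
import math

def softmax(scores):
    """Turn raw model scores (logits) into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for the next token after "the answer is":
vocab = ["not", "42", "unknown"]
probs = softmax([2.0, 1.0, 0.5])
for token, p in zip(vocab, probs):
    print(f"{token!r}: {p:.2f}")
```

Surfacing these probabilities alongside the text would be one way to give outputs the "indications of estimation" the comment says they lack.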
When you show a program both sides of every coin, it can only show them back to you.
r/technology
comment
r/technology
2024-16-06
What is VC? Thx
r/technology
comment
r/technology
2024-16-06
Many people are quick to dismiss AI as a contextual probability system, while ignoring the simple fact that that's basically how neural nets (including our own) function. AI and humans don't quite do the same things, or have all of the same issues... but weirdly, they do many of the same things and have many similar issues! The real problem of AI is less a problem of AI itself and more a problem of how we actually go about ascertaining high-quality data and truthfulness, and go about training that into neural nets (both ours and machine-based).
r/technology
comment
r/technology
2024-16-06
So like a politician
r/technology
comment
r/technology
2024-16-06
Ya my wife is in marketing and uses it. She’s worried ppl are going to think she’s useless if she’s using it. I told her it’s a tool to supplement her work, not replace her. Like when nail guns came out nobody claimed roofers were suddenly worthless. The nail gun just made them faster and more efficient than the nail and hammer, but the roofers are still the ones doing the work not the tool
r/technology
comment
r/technology
2024-16-06
The main differences between me and an AI are that I can reason, and that an AI uses approximations of how a brain works; it isn't *how* a brain works. What I get tired of is people's insistence on anthropomorphising mathematical models that can't think, reason, or even know that there is a world behind the symbols they use to create sentences. I can change your mind. You can change your mind. An LLM cannot change its training data nor its training. It needs to be retrained. Humans can change their neural pathways in real time.
r/technology
comment
r/technology
2024-16-06
Yeah, things like this are why I still struggle with how I feel about anthropomorphic language with AI. On the one hand, it makes it too easy to ascribe actual "intent" and "will" to a pile of sand we figured out how to make play cool tricks with electricity. On the other hand, we already do it every day as a linguistic shortcut: "the dishwasher doesn't like to be overloaded" or "my car gets cranky when it's too cold out". People aren't thinking their dishwasher or car literally has an inner life and opinions, but it's often easier to communicate in these terms. Hallucinations I feel the same about. They share a fundamental trait with human hallucination that I think is part of the key to understanding them: to be a bit reductive, humans hallucinate when our brain looks for and finds patterns in background noise, especially when there's a lack of an actual meaningful pattern to find (think of sensory deprivation tanks). AIs, like us, are good at pattern matching to a fault, and a hallucination can be thought of in both cases as finding a pattern in the noise that isn't there (the analogy is a little cleaner with image diffusion models but at least conceptually applies just fine to transformer-based LLMs). What's interesting is that this suggests there could be a whole class of similar misbehaviors we aren't fully aware of yet, and it also (in part) explains why RAG can be a good tool to combat hallucinations: you're giving the model a hook into an actual signal so it doesn't make one up from the background noise.
r/technology
comment
r/technology
2024-16-06
Did you try something more obscure or specific like how I described? I updated my comment, as well.
r/technology
comment
r/technology
2024-16-06
Question: is it reasonable to see LLMs as a rough equivalent to the language centers of the human brain? As a thought experiment, if one were able to grow the language areas of a human brain in isolation in a jar and provide it with vast examples of language, would the contents of the jar provide similar results if queried? Because of the hype around LLMs, we may be having the wrong discussion and be expecting a subsystem of cognition to perform as a complete system. It's not going to succeed, but that doesn't mean it's still not a major advance.
r/technology
comment
r/technology
2024-16-06
I'm not currently threatened, it's not been implemented in any particular way because no studio will go near it due to copyright issues. Directors won't go near it because it cannot produce consistent, highly specific results based on their notes. My main concern is heads of major studios frothing at the mouth about how it'll replace all us pesky artists. It's not about the AI's capabilities but the strong will of the studios to get rid of us as soon as possible
r/technology
comment
r/technology
2024-16-06
I don't know. I think an OpenAI researcher writing a paper saying AGI is 3 years away (while resigning, so not hype pumping) indicates these systems are more than just probabilistic matchmaking. https://www.windowscentral.com/software-apps/former-openai-researcher-says-agi-could-be-achieved-by-2027 Or perhaps we're understating what probabilistic matching is happening in our own brains. When the system is able to cluster concepts together through contextual analysis, that's a form of understanding similar to child brain development. Is it full understanding? No, of course not, but it's also not as far off from us as a statement like "it's just guessing the next best word based on probability" implies.
r/technology
comment
r/technology
2024-16-06
What do you mean? We say AIs "hallucinate" because it appears on the surface as being very similar to hallucinations experienced by humans. That's textbook anthropomorphism.
r/technology
comment
r/technology
2024-16-06
That’s exactly what a lot of people come to Reddit for. 
r/technology
comment
r/technology
2024-16-06
That all goes out the window when it's trained on Reddit
r/technology
comment
r/technology
2024-16-06
Chomsky is a hack outside linguistics, and even in computational linguistics it is debatable whether he is relevant anymore. > ChatGPT doesn't actually learn anything from its massive dataset, only prediction on the appropriate response What an idiotic statement. That meets the definition of learning. > Chomsky did provide example Okay, then answer what was asked - define the concept and provide a scientific test. > 'Really understanding' is not a well-defined concept, but rather something people use to rationalize. > If you think otherwise, provide a scientific test to determine if something is 'really understanding' or just 'pretending'.
r/technology
comment
r/technology
2024-16-06
> at this point it's worse than useless... it's misinformation. Then why have I been able to use it successfully for every single thing I've tried? You're just as bad as the AIs, making up and stating shit as fact.
r/technology
comment
r/technology
2024-16-06
OK, so I finally got a chance to go through this in more detail as well as look at some of their citations. I definitely agree that careless anthropomorphisms can lead us to misunderstand models in ways that are at best unhelpful and at worst cause harm. But my problem here is that they're looking to replace the term with "bullshit", which is itself an anthropomorphism. It's just replacing one flawed analogy with another. I absolutely take their point that in many cases the distinction between bullshit and lying is important to understand for these models. I actually like the suggestion of "confabulation", which they reject as being too anthropomorphic (though I don't find it any more so than "bullshit"). I'll counter their argument that AI isn't hallucinating (sure, it isn't literally hallucinating, but neither is it literally bullshitting). One avenue to human hallucination is sensory deprivation: our brains crave pattern matching, so when there's no signal, the brain will amplify any background noise until it looks like signal. In much the same way, LLMs look for patterns in analogous ways, and if they don't find the right information they are prone to boosting what they do find until it looks like a pattern, even if it's just background noise. There's a lot to nitpick and lots of threads to pull there, but I think that'll be the case with any analog to human behavior, including "bullshitting". In truth LLMs do none of these things, and neither analogy is perfect, but they're both useful ways of thinking about LLMs and both have their place in understanding AI "behavior".
r/technology
comment
r/technology
2024-17-06
Right. The AI just learned everything by itself and wasn't fed any information. That is totally how this works.
r/technology
comment
r/technology
2024-17-06
The point wasn't that you should use it to formulate arguments for a case. It was that you can use it for some tasks, like finding errors in legal arguments, because the training data covers this type of procedure and there are ample examples of how to do it. But I'll bite on this question: > How well, do you suppose, an algorithm would do to keep you from death row? First off, pretty much all lawyers are using "algorithms" of some sort to do their jobs. If they use any software to process documents, they're using a search and sorting algo to find relevant information because it's much faster and more accurate than a person trying to do this. Imagine if you had thousands of pages of docs and had to search through them by hand. You'd likely miss a lot of important information. I'm assuming you mean language models, which I'll refer to as AI. This also depends on a lot of things. Like, how is it being used in the development of the arguments, and how much money do I have to pay for a legal defense? If I had unlimited money and could afford the best defense money can buy, then even the best team of lawyers will still not be perfect at formulating a defense and might still miss valuable information, but I would choose them over AI systems, although it wouldn't hurt to also use AI to check their work. Now, if I had a public defender who isn't capable of hiring a horde of people to analyze every document and formulate every piece of the argument, then I absolutely would want AI to be used, because it would help my lawyer have a higher chance of winning. Let's say we have the AI analyze the procedural documents and check for violations, or comb the evidence for flaws. Even if my public defender is already doing this, they may miss something that would free me, and having the AI be an extra set of eyes could be very useful.
Considering how expensive a lawyer is, this tool will help bring down the cost and improve outcomes for people who can't afford the best legal defense available, which is most people.
r/technology
comment
r/technology
2024-17-06
r/technology
post
r/technology
2024-15-06
Now mansplaining mansplaining is peak reddit. You win the internet today.
r/technology
comment
r/technology
2024-15-06
God this is sad
r/technology
comment
r/technology
2024-15-06
Well you are seven after all. And you had no reason to drop by either.
r/technology
comment
r/technology
2024-15-06
It's easier and more accurate to use the earth's magnetic field and your phone's compass. It's already commercialized.
r/technology
comment
r/technology
2024-15-06
Go back to TikTok
r/technology
comment
r/technology
2024-15-06
Today is your moment to shine though.
r/technology
comment
r/technology
2024-15-06
I posted as I was wondering "gee, how they gonna overcome these problems?" But writing that just invites redditors to get argumentative and reply, *"Read the Article!"* How could I know I was replying to *exactly* that type of thin-skinned person?
r/technology
comment
r/technology
2024-16-06
Yes it does.
r/technology
comment
r/technology
2024-16-06
How does a magnetic compass tell you where you are?
r/technology
comment
r/technology
2024-16-06
GPS does not require mapping for positioning, and has far higher accuracy, though magnetic field location can be superior if occlusion is a concern
r/technology
comment
r/technology
2024-16-06
How can anything tell relative movement without an outside reference? The earth is revolving while rotating around the sun, in a solar system rotating within our galaxy, traveling away from the center of the universe. This series of corkscrewing motions is only hidden from us by balanced gravity systems. I presume this is another version of an accelerometer.
r/technology
comment
r/technology
2024-16-06
Wasn't this a plot device in "The Big Bang Theory"?
r/technology
comment
r/technology
2024-16-06
You can't apply ITAR to technology universally; if you can't actually control it, you can't ITAR it. There's lots of technology people would like to put on export control, but you can't do it effectively unless you can actually control the technology. We've seen that a lot now with sensing technology, both night and thermal vision, which have strong export controls... that are made laughable by the technology being readily replicated, and now available and often bested by tech from civilian markets that can't be controlled. It doesn't even help you internally, because the borders aren't closed. So you can order and have delivered, direct from China, cameras that would fall under ITAR in the US, and then hilariously not be able to export them again because apparently "China" might get them...
r/technology
comment
r/technology
2024-16-06
The best INS available cost multiple millions and still get pretty significant drift. In aircraft they're largely phased out, as they don't need the system anymore. Submarines still use them, but if you aren't taking regular fixes and correcting, they still pick up a good bit of drift over time. These don't drift, they should theoretically be cheaper, they should be much smaller in the long run, and they are at least an order of magnitude more accurate.
r/technology
comment
r/technology
2024-16-06
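The drift described above compounds because inertial navigation double-integrates acceleration, so even a tiny constant sensor bias grows roughly quadratically in position error. A back-of-the-envelope sketch (the bias value is an illustrative assumption, not a spec of any real INS):

```python
# A constant accelerometer bias of 1e-4 m/s^2, double-integrated over
# one hour at 1 s steps: bias -> velocity error -> position error.
bias = 1e-4          # m/s^2, assumed sensor bias
dt = 1.0             # s, integration step
velocity_error = 0.0
position_error = 0.0
for _ in range(3600):
    velocity_error += bias * dt          # first integration
    position_error += velocity_error * dt  # second integration
print(f"after 1 h: {position_error:.0f} m of drift")  # ~bias * t^2 / 2
```

Hundreds of meters from a bias this small is why regular position fixes (or a drift-free sensing modality) matter so much.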
In stations, but they won't pay for the stuff for inside the tunnels, it's very very expensive.
r/technology
comment
r/technology
2024-17-06
r/technology
post
r/technology
2024-15-06
Correct. And whenever I read about something like this, every now and then I get an unavoidable urge to slap myself for my damn stupid morals. If only I didn't have them, I could've been rich 10 times over.
r/technology
comment
r/technology
2024-15-06
The next war is here, whether you want it or not.
r/technology
comment
r/technology
2024-15-06
Was there peace in the world after Japan got nuked twice?
r/technology
comment
r/technology
2024-15-06
With Japan? Yes, yes there was.
r/technology
comment
r/technology
2024-15-06
Proof? That’s a very extreme thing to say
r/technology
comment
r/technology
2024-15-06
Hahahahahahahahahahahahah No.
r/technology
comment
r/technology
2024-15-06
I’m confused, are surgical machines connected to the internet?
r/technology
comment
r/technology
2024-15-06
Why the fuck would Israel do this to the UK? It makes literally no sense at all. They are our allies while we are involved in a war with Russia. How is it possible to be this stupid?
r/technology
comment
r/technology
2024-15-06
The UK has political reasons for letting the world know it's being hacked by Russia, and it also has legal reasons for admitting to being hacked; the NHS legally has to let people know this has happened. Russia, on the other hand, is not an open government, and it has trouble admitting that things have happened to it that might indicate it's failing. So we have no way of knowing whether Russian institutions have been affected by cyber attacks, as they aren't going to admit it.
r/technology
comment
r/technology
2024-15-06
Schedules, appointments, which patient needs to go where, which patient is allergic to which meds, which patient is in front of me for which operation… the vast majority of that info is stored on PCs that they may have lost access to because of the attack. Without that info, they can't risk certain procedures.
r/technology
comment
r/technology
2024-15-06
They stopped using filing cabinets when PCs became the standard. The NHS has been desperately trying to get rid of paper processes in favour of electronic ones for years now.
r/technology
comment
r/technology
2024-15-06
Japan is partly famous for its pacifist constitution post WW2. The 80 years since the end of WW2 has been significantly more peaceful than any other 80 year period in human history. Expecting no wars is not a reasonable position and not worth arguing with you over.
r/technology
comment
r/technology
2024-15-06
> There’s no perfect defense sadly. Exactly, hence the benefit of deterrence. That said, there's a difference between "one mistake" and "the entire environment is a complete disaster", and I think their environment is much closer to the latter. It also doesn't just take one mistake, it takes the attacker finding your one mistake. So fewer mistakes does provide a massive benefit, and at some point it turns into "you don't have to outrun the bears, just enough slow members of the group that the bears are too busy eating them to get to you".
r/technology
comment
r/technology
2024-16-06
It is impossible to build a system without vulnerabilities
r/technology
comment
r/technology
2024-16-06
https://en.wikipedia.org/wiki/Nirvana_fallacy
r/technology
comment
r/technology
2024-16-06
Go look up *flippant* and then shut up
r/technology
comment
r/technology
2024-16-06
It's not an example of an attack against Russia though, so I expected you to dismiss it. I was particularly annoyed by the claim "If examples can not be provided then such attacks do not exist." because such attacks are often kept secret. Absence of evidence is not evidence of absence in general, but *especially* when it comes to hush-hush operations like this. We'll probably learn about *some* of the less sensitive ones in a few decades, likely after the 25 years when stuff gets declassified by default (I bet most of the operations will have their records either disappeared or exempted, and we'll never hear of them). A hilarious case of a cyberattack against Russia (but not by the US) was when Dutch intelligence pwned a Russian state sponsored hacking group, broke into the camera system in the building, and then publicly released the footage. https://apnews.com/article/ef3b036949174a9b98d785129a93428b
r/technology
comment
r/technology
2024-16-06
Yeah because clerical errors couldn’t possibly happen with loose papers…
r/technology
comment
r/technology
2024-16-06
You would have no idea
r/technology
comment
r/technology
2024-16-06
> I don't know why the UK wouldn't just trigger Article 5 of NATO considering it's an attack on their home-territory which will cause deaths. If the UK doesn't provide evidence that this was conducted by the Russian military, or on behalf of them, it would be hard to justify to the people. Obviously Russia will deny it, and the publicly known fact that it's a Russian-speaking group doesn't necessarily implicate the government of Russia. Imagine if an English-speaking criminal organisation conducted an attack on Russia. Would that be an act of war by the UK against Russia? And I said "provide evidence", rather than "find evidence", since the UK could even find evidence that this was conducted by the Russian military, and still be reluctant to share it, because that could reveal their capabilities and methods, and compromise their ability to collect more information in the future.
r/technology
comment
r/technology
2024-16-06
It’s not the hospital - the Conservative government has over the last 14 years systematically and subversively withheld money from the NHS and done everything it can to kneecap it through backdoor privatisations. Hospitals cannot afford to do these things when fighting a government who resents their existence because it undermines their ideology.
r/technology
comment
r/technology
2024-16-06
The hospitals were not hit by ransomware, it was a private company called Synnovis who did lab tests for the hospitals.
r/technology
comment
r/technology
2024-16-06
r/technology
post
r/technology
2024-15-06
Because some people (or one person with multiple accounts) are shilling *hard* for Chinese industry. Or being paid to promote the Economist.
r/technology
comment
r/technology
2024-15-06
Don't be disheartened bc of these bots. The Economist is an excellent source and recommended reading for an MBA or similar degrees.
r/technology
comment
r/technology
2024-16-06
Hint: it’s the Chinese government
r/technology
comment
r/technology
2024-16-06
r/technology
post
r/technology
2024-15-06
Joke's on them, my TV is plugged into my PC, through a VPN and PiHole.
r/technology
comment
r/technology
2024-15-06
>Not being able to use the true Firefox on iOS in exchange for the privacy Apple offers is a decent trade off. This has always been about competition suppression, and they've just managed to market it [successfully, based on these comments] as "privacy".
r/technology
comment
r/technology
2024-15-06
> It needs to be opt in with people not opting in still able to use the service. I don't think that could ever pass constitutional muster. I'm not sure there's any service that has to be provided against the provider's will, except maybe emergency room care. Businesses usually have the right to not do business with you if they want, so long as they're not discriminating based on a protected class.
r/technology
comment
r/technology
2024-15-06
The BBC and ITV have actively assisted in preventing this by only showing really shit programs, so no-one wants to watch TV.
r/technology
comment
r/technology
2024-15-06
Well, that gave me a laugh. Apple pretends to give a shit about privacy, but their "controls" do nothing except give a false sense of security, and just delay sending your personal data until the device is idle.
r/technology
comment
r/technology
2024-15-06
Like 90% of my use of my phone is the browser. I'm not willing to give up control of that.
r/technology
comment
r/technology
2024-15-06
Thanks, I'll check it out.
r/technology
comment
r/technology
2024-15-06
Shit, they're watching Rhett and Link AGAIN...
r/technology
comment
r/technology
2024-15-06
I should say you won't see ads in your launcher; you'll still get ads in other apps, but SmartTube Beta cures that for YouTube, and there's some app for Twitch I don't remember. Hulu/Netflix/paid stuff is still its own thing. Jellyfin, maybe?
r/technology
comment
r/technology
2024-15-06
We’re almost in 1984.
r/technology
comment
r/technology
2024-15-06
There’s also a couple blocklists you can add to PiHole specifically for smart TVs, one such list can be found [here.](https://www.reddit.com/r/pihole/comments/ovw81t/i_found_a_blocklist_that_blocks_99_of_the_ads_on/?utm_source=share&utm_medium=ios_app&utm_name=iossmf)
r/technology
comment
r/technology
2024-16-06
> If only all of the apps I use were available for Apple TV, then I would absolutely disconnect my Samsung TV from the network. Unfortunately, there's one streaming app I occasionally use that is only available on Android TVs. If it's an app that's accessible on a computer, then your best bet is to just connect a laptop via HDMI and invest in a Bluetooth mouse/remote. I had an older smart TV that just got slower and slower before I finally got tired, factory reset it, and never re-connected it to the internet. Ran beautifully for years that way.
r/technology
comment
r/technology
2024-16-06
open source everything
r/technology
comment
r/technology
2024-16-06
Ok how do I stream then? Is Roku any better? What third party device do you suggest?
r/technology
comment
r/technology
2024-16-06
You serious? The US is notorious for illegally spying on its citizens. You can discuss laws all you want in Congress etc., but it doesn't mean anything if the agencies just do it either way.
r/technology
comment
r/technology
2024-16-06
If you have to do streaming, make your own Android TV box with a Raspberry Pi. If you don't want to do that, you could get a Google Chromecast, but that'll track you. Best option is to build a media center and set sail for the high seas.
r/technology
comment
r/technology
2024-16-06
Cool but it’s technically illegal which is a big difference to making it explicitly legal and requiring companies to make backdoors.
r/technology
comment
r/technology
2024-16-06
Bought my tv in 2019. I use it as a display for my pc. It has never been connected to the net and never will be. I watch what I want and no ads.
r/technology
comment
r/technology
2024-16-06
Yes, but people on the internet don’t seek to understand nuance anymore and are just happy being fearful of everything.
r/technology
comment
r/technology
2024-16-06
Yes I would like to see this.
r/technology
comment
r/technology
2024-16-06
So depressing. :(
r/technology
comment
r/technology
2024-16-06
I hope they notice that I mute all political ads and finally leave me the fu k alone
r/technology
comment
r/technology
2024-16-06
r/technology
post
r/technology
2024-15-06
r/technology
post
r/technology
2024-15-06
I thought they already had satellite communications for emergency location services. Is this something different or just more generic messaging?
r/technology
comment
r/technology
2024-15-06
Same communication system. Different context. The emergency one can self-trigger in a car crash and generate responses.
r/technology
comment
r/technology
2024-15-06