

they stop selling parts quickly
That’s weird. If they stopped making parts, how did I get a replacement battery for my Fairphone 3?
Have a look at their impact report. They themselves claim that they don’t spend more than €5 per phone on fair trade or environmental stuff.
I’ve looked through their report and I can’t find this info. The only thing I’ve found is a ~€2 bonus per phone to their factory workers, which is only a small fraction of a phone’s supply chain. Can you provide a more detailed reference supporting your claim?
Wirelessly.
Fairphone doesn’t do wireless charging.
A big problem they have is that they have to rely on Qualcomm for security updates, and the flagship chips simply don’t get 8+ years of support. Fairphone uses Qualcomm’s IoT chips, which come with much longer support.
Sure, but it’s not more valuable than $30 + regular price increases for 60+ years. That’s what a lifetime membership is.
Let’s flip that around: for my own finances, $300 today is worth a lot more than $30 a year spread over 10 years (rough math sketched below). So if I expect the company to go out of business in 10 years or so, I would have been better off paying for the subscription.
Let’s also not forget that companies don’t take that $300 and responsibly invest it. It gets reinvested in a risky bid to grow the company and get enough people to subscribe in order to pay for your service going forward.
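As a minimal sketch of that time-value argument, assuming a 5% annual discount rate (an illustrative number, not one from the comments above):

```c
/* Rough sketch: present value of $30/year for 10 years vs $300 up front.
 * The 5% discount rate is an assumed, illustrative number. */
#include <math.h>
#include <stdio.h>

int main(void) {
    const double rate = 0.05;                 /* assumed annual discount rate */
    double npv = 0.0;
    for (int year = 1; year <= 10; year++)
        npv += 30.0 / pow(1.0 + rate, year);  /* discount each yearly payment */
    printf("$30/yr for 10 years is worth about $%.0f today, vs $300 paid now\n", npv);
    return 0;
}
```

At that rate the subscription works out to roughly $232 in today’s money, so the lifetime deal only wins if the service outlasts the break-even point.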
Lifetime services/updates are always a scam. The economics of this are really simple: Nebula is $30 per year or $300 lifetime. That lifetime membership covers only 10 years of subscription. So what’s the plan after that? There are really only three outcomes:
Buying a lifetime membership, you’re gambling that Nebula will grow big enough that other people’s subscriptions will pay for your service. Your membership is a liability for them.
It’s also bad from the other end. Lots of small software devs will sell lifetime updates but eventually need to abandon their products because they simply run out of money.
A service continually costs money to provide. You can’t pay for that with a single payment. Lifetime services are simply incompatible with running a business long term. It’s a bad idea and someone is always getting screwed.
Support for 2015 Macs ended 7 months ago. Forget 10 years ago, my 2015 Mac doesn’t run like it used to in Big Sur.
thanks to the founders that gave us compulsory, preferential voting
Not sure if sarcastic or not, but it made me look up when these things were introduced. Preferential voting came in 1918 under Billy Hughes’ Nationalist Party. Compulsory voting was a state thing, starting with QLD in 1925 and ending with SA in 1942.
Not sure what you’re expecting that fuse to do when the battery is on fire from crash damage?
I’m more familiar with RISC-V than I am with ARM though it’s my understanding they’re quite similar.
ARM/RISC-V are load-store architectures, meaning they divide instructions between loading/storing and doing computation. x86 on the other hand is a register-memory architecture, having instructions that do both computation as well as loading/storing.
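To illustrate the difference, here’s a sketch; the function is made up and the instruction sequences in the comments are illustrative, not any particular compiler’s output:

```c
/* The same accumulation loop; the comments note how the memory access is
 * typically lowered on each ISA family. */
#include <stddef.h>

long sum(const long *data, size_t n) {
    long total = 0;
    for (size_t i = 0; i < n; i++) {
        /* x86-64 (register-memory): one instruction can load and add,
         *   e.g. add rax, [rdi + rcx*8]
         * ARM64/RISC-V (load-store): the value is loaded into a register
         *   first, e.g. ldr x1, [x0], then add x2, x2, x1 */
        total += data[i];
    }
    return total;
}
```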
ARM/RISC-V also have weaker memory-ordering guarantees, allowing for less synchronization between cores; however, RISC-V has an extension to enforce the same guarantees as x86, and Apple’s M-series CPUs have a similar extension for ARM. If you want to emulate x86 applications on ARM/RISC-V, these kinds of extensions are essential for performance.
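A small sketch of why that ordering matters, using C11 atomics (the names are made up):

```c
/* Message passing between two threads. On x86's stronger model the plain
 * stores would already be seen in order by another core; on ARM/RISC-V's
 * weaker model it's the release/acquire pair that guarantees the reader
 * sees `payload` once it sees `ready`. */
#include <stdatomic.h>
#include <stdbool.h>

int payload;
atomic_bool ready = false;

void producer(void) {
    payload = 42;
    /* release: earlier writes may not be reordered past this store */
    atomic_store_explicit(&ready, true, memory_order_release);
}

int consumer(void) {
    /* acquire: later reads may not be reordered before this load */
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;
    return payload; /* guaranteed to observe 42 */
}
```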
ARM/RISC-V instructions are variable width but only in a limited sense. They have “compressed instructions” - 2 bytes instead of 4 - to increase instruction density in order to compete with x86’s true variable width instructions. They’re fairly close in instruction density, though compressed instructions are annoying for compilers to handle due to instruction alignment. 4 byte instructions must be aligned to 4 bytes, so if you have 3 instructions A, B and C but only B has a compressed version then you can’t actually use it because there must be 4 bytes between instructions A and C.
ARM/RISC-V also make backwards compatibility entirely optional; Apple’s M-series don’t implement 32-bit mode, for instance, whereas x86-64 still has “real mode” for running 16-bit operating systems.
There’s also a number of other differences, like the number of registers, page table formats, operating modes, etc, but those are the more fundamental ones I can think of.
Up until your post I had thought it was exactly the size of the instruction set, with x86 having lots of very specific multi-step-in-a-single-instruction operations as well as crufty instructions kept for backwards compatibility (like MPSADBW).
The MPSADBW thing likely comes from the hackaday article on why “x86 needs to die”. The kinda funny thing about that is MPSADBW is actually a really important instruction for (apparently) video encoding; ARM even has a similar instruction called SABD.
x86 does have a large number of instructions (even more so if you want to count the variants of each), but ARM does not have a small number of instructions and a lot of that instruction complexity stops at the decoder. There’s a whole lot more to a CPU than the decoder.
compressed instruction set /= variable-width […]
Oh for sure, but before the days of superscalars I don’t think the people pushing RISC would have agreed with you. Non-fixed instruction width is prototypically CISC.
For simpler cores it very much does matter, and “simpler core” here could also mean barely superscalar but with insane vector width, like one of 1024 GPU cores consisting mostly of ALUs, no fancy branch prediction silicon, supporting enough hardware threads to hide latency and keep those ALUs saturated. (Yes, the RISC-V vector extension has opcodes for gather/scatter, in case you’re wondering.)
If you can simplify the instruction decoding that’s always a benefit - more so the more cores you have.
Then, last but not least: RISC-V absolutely deserves the name it has because the whole thing started out at Berkeley.
You’ll get no disagreement from me on that. Maybe you misunderstood what I meant by “CISC-V would be just as exciting”? I meant that if there was a popular, well designed, open source CISC architecture that was looking to be the eventual future of computing instead of RISC-V then that would be just as exciting as RISC-V is now.
The original debate from the 80s that defined what RISC and CISC mean has already been settled, and neither of those categories really applies anymore. Today all high performance CPUs are superscalar, use microcode, reorder instructions, have variable width instructions, vector instructions, etc. These are exactly the bits of complexity RISC was supposed to avoid in order to achieve higher clock speeds and therefore better performance. The microcode used in modern CPUs is very RISC like, and the instruction sets of ARM64/RISC-V and their extensions would have likely been called CISC in the 80s. All that to say: the whole RISC vs CISC thing doesn’t really apply anymore, and neither does it explain any differences between x86 and ARM. There are differences and they do matter, but by and large it’s not due to RISC vs CISC.
As for an example: if we compare the M1 and the 7840u (similar CPUs on a similar process node, one arm64 the other AMD64), the 7840u beats the M1 in performance per watt and outright performance. See https://www.cpu-monkey.com/en/compare_cpu-amd_ryzen_7_7840u-vs-apple_m1. Though the M1 has substantially better battery life than any 7840u laptop, which very clearly has nothing to do with performance per watt but rather design elements adjacent to the CPU.
In conclusion, the major benefit of ARM and RISC-V really has very little to do with the ISA itself; rather, their more open nature allows manufacturers to build products that AMD and Intel can’t or don’t. CISC-V would be just as exciting.
Wrong. Unified memory (UMA) is not an Apple marketing term; it’s a description of a computer architecture that has been in use since at least the 1970s. For example, game consoles have always used UMA.
Apologies, my google-fu seems to have failed me. Search results are filled with only apple-related results, but I was now able to find stuff from well before. Though nothing older than the 1990s.
While iGPUs have existed for PCs for a long time, they did not use a unified memory architecture.
Do you have an example? Because every single one I look up has at least optional UMA support. The reserved RAM was a thing, but it wasn’t the entire memory of the GPU; it was just reserved for the framebuffer. AFAIK iGPUs have always shared memory like they do today.
It has everything to do with soldering the RAM. One of the reasons iGPUs sucked, other than not using UMA, is that GPU performance is almost always limited by memory bandwidth. Compared to VRAM, standard system RAM has much, much less bandwidth, causing iGPUs to be slow.
I don’t disagree, I think we were talking past each other here.
LPCAMM is a very recent innovation. Engineering samples weren’t available until late last year and the first products will only hit the market later this year. Maybe this will allow for Macs with user-upgradable RAM in the future.
Here’s a link to buy some from Dell: https://www.dell.com/en-us/shop/dell-camm-memory-upgrade-128-gb-ddr5-3600-mt-s-not-interchangeable-with-sodimm/apd/370-ahfr/memory. Here’s the laptop it ships in: https://www.dell.com/en-au/shop/workstations/precision-7670-workstation/spd/precision-16-7670-laptop. Available since late 2022.
What use is high bandwidth memory if it’s a discrete memory pool with only a super slow PCIe bus to access it?
Discrete VRAM is only really useful for gaming, where you can upload all the assets to VRAM in advance and data practically only flows from CPU to GPU and very little in the opposite direction. Games don’t matter to the majority of users. GPGPU is much more interesting to the general public.
gestures broadly at every current use of dedicated GPUs. Most of the newfangled AI stuff runs on Nvidia DGX servers, which use dedicated GPUs. Games are a big enough industry for dGPUs to exist in the first place.
“unified memory” is an Apple marketing term for what everyone’s been doing for well over a decade. Every single integrated GPU in existence shares memory between the CPU and GPU; that’s how they work. It has nothing to do with soldering the RAM.
You’re right about the bandwidth: current socketed RAM standards have severe bandwidth limitations which directly limit the performance of integrated GPUs. Again though, this has little to do with being socketed: LPCAMM supports up to 9.6 GT/s, considerably faster than what ships with the latest Macs.
This is why user-replaceable RAM and discrete GPUs are going to die out. The overhead and latency of copying all that data back and forth over the relatively slow PCIe bus is just not worth it.
The only way discrete GPUs can possibly be outcompeted is if DDR starts competing with GDDR and/or HBM in terms of bandwidth, and there’s zero indication of that ever happening. Apple needs to put a whole 128GB of LPDDR in their system to be comparable (in bandwidth) to literally 10-year-old dedicated GPUs - the 780 Ti had over 300GB/s of memory bandwidth with a measly 3GB of capacity. DDR is simply not a good choice for GPUs.
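For a rough sense of the gap, here’s a back-of-the-envelope calculation with representative assumed numbers (GDDR5 at 7 GT/s on a 384-bit bus for the 780 Ti, LPDDR5-6400 on a typical 128-bit laptop bus):

```c
/* Back-of-the-envelope: bandwidth = transfer rate * bus width / 8.
 * Numbers are representative assumptions, not vendor-exact figures. */
#include <stdio.h>

int main(void) {
    double gtx_780ti = 7.0e9 * 384 / 8 / 1e9;  /* GDDR5, 7 GT/s, 384-bit   -> ~336 GB/s */
    double lpddr5    = 6.4e9 * 128 / 8 / 1e9;  /* LPDDR5-6400, 128-bit bus -> ~102 GB/s */
    printf("780 Ti: %.0f GB/s vs 128-bit LPDDR5: %.0f GB/s\n", gtx_780ti, lpddr5);
    return 0;
}
```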
That’s kinda true, in the sense that all batteries use a chemical reaction to generate electricity and a damaged battery can short and thus ignite arbitrarily. But there are lithium-based batteries like LiFePO₄ that burn significantly less intensely, if at all, and there are lab-only chemistries that are non-flammable. So it’s not really because of the lithium specifically that they burn so well.
TLDs are valid in emails, as are IPv6 addresses, so checking for a “.” is technically not correct. For example, a@b and a@[IPv6:2001:db8::1] are both valid email addresses.
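As a minimal sketch of a more permissive check (the helper name is hypothetical), you’d only require a non-empty local part, an “@”, and a non-empty domain:

```c
/* Permissive validity check: don't require a '.', just an '@' with
 * something on both sides. Accepts a@b and a@[IPv6:2001:db8::1]. */
#include <stdbool.h>
#include <string.h>

bool looks_like_email(const char *s) {
    const char *at = strrchr(s, '@');  /* last '@' splits local part / domain */
    return at != NULL && at != s && at[1] != '\0';
}
```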
The only one I’ve seen is the VW “e-up!”.
There are vulnerabilities like the recent iMessage exploit that are executed remotely with no interaction by the user. In combination with the ability to self-spread, you get mass exploits like WannaCry, which spread to 300k+ computers in 7 hours. All you need is a network connection.
So you push digital goods to a robust public platform like IPFS and tie decryption to a signed, non-revocable rights token that you own on a blockchain.
What you describe is fundamentally impossible. In order to decrypt something you need a decryption key. Put that on the blockchain and anyone can decrypt it.
Even if you could, pirates would only need to buy a single decryption key and suddenly your movie might as well be freely available to download. Pirates would never pay hosting fees, because they’d be using the same infrastructure as customers, and they couldn’t be taken down because they’d be indistinguishable from customers.
Thanks for the detailed reply. Saying that “they themselves claim that they don’t spend more than €5 per phone on fair trade or environmental stuff” is a complete lie: it’s not a number they’re claiming, it’s a number you’ve estimated. And let’s be clear: what you’ve done is take $3k in gold credits plus $13k in cobalt credits and multiply that by an arbitrary 8x.
I think you’ve gone into your analysis with a foregone conclusion. There simply isn’t enough information to say anything about the cost overhead of being “fair”.
And yet the FP4 was significantly less recycled. Plastic is certainly not cheaper to recycle; that’s a lie the plastic industry’s been pushing for a while.