This is a very verbose article that does not really engage with the actual question in the title. I still do not understand why Apple (and big tech in general, it seems?) reroutes users to external forensic services instead of doing the forensics themselves. The only thing the article says about it is "they do not want to / it is better this way".
Apple does extensive forensics itself, and spends more on platform and hardware security than all but perhaps? two other companies in the world. By referring people to nonprofit labs, Apple is encouraging information about spyware systems --- which target multiple platforms --- to be widely disseminated, and allows the OSINT process, which Apple has no comparative advantage in running, to work out targeting and sourcing.
The norm in high-end platform security practices at companies is to keep cards as close to the vest as possible, revealing information largely in the form of feature-style announcements of security improvements. The norm Apple is setting here is better.
This is why I read more comments than articles.
Sorry, I'm dumb. The value here is engaging this NGO to attempt to figure out who is being targeted and why; go public with that; and that's better not coming from Apple?
The value here is that the NGO would then have information about hacking attempts against victims on Android, iOS, Windows, ChromeOS, and macOS, and could put that combined information to more use than if it were limited to just iOS and macOS users.
It said that AccessNow is really good at intake, and it wouldn’t be good for Apple to be in the business of inspecting so much personal data.
My guess is that if Apple did this themselves, it would have to be firewalled off from most of Apple, much like AccessNow already is, so hopefully they help fund AccessNow.
The engineering requirements to build a branch of a company with the security needs of a specialized SOC would be a crazy endeavor, to the point that whatever came of it would probably look as foreign to the main company's function as what currently exists. Having a separate non-profit agency handle these cases not only allows the agency's incentives to align, but also lets it tailor the overall security design of its facilities and internal networks. It's a win-win for both private and public interest imo.
With regard to US politics, there are also campaign-funding considerations around how corporations assist candidates: https://defendcampaigns.org/donors
It's possible that the vendor has targeted the user as the result of a court order. (Not necessarily by installing a software implant on the device itself, but I suspect often these things are not necessarily as transparent as they should be even if done server-side.) This means there is potential of a conflict of interest here, and deferring to a third party solves that.
If you are targeted, presumably they will be attacking you from every angle they can. If you have other tech besides Apple's, that's just another attack vector. If Apple did this, they'd be Apple-focused, leaving many vulns uncovered. Because this group is not Apple-specific, they can stay on top of all the other avenues, like IoT devices, Android, etc.
Based on that, it seems smart for each of the tech players to support the 3rd-party solutions.
Consider the liabilities, expectations, and political issues if Apple were doing such support work in-house. On the last point: Apple has large, profitable operations in a fair number of the countries which might be targeting certain dissidents and journalists. Mr. Cook does not want any "you are aiding our worst enemies..." pushback when he is negotiating with the governments of those places.
It's to maintain the legality of the arm's-length ability to do jailbreak-like things when a three-letter agency asks.
The big takeaway from this article:
> All the experts TechCrunch spoke with strongly recommend turning on Lockdown Mode if you think you may be a target, especially if you are a journalist, human rights defender, or dissident.
What does Lockdown Mode do?
> When someone enables Lockdown Mode, some Apple apps and services work differently. For example, most attachments and link previews are blocked on iMessage, FaceTime calls from unknown contacts are filtered, location information is removed from shared pictures and certain fonts on websites are prevented from loading.
https://techcrunch.com/2023/12/07/apple-says-it-is-not-aware...
IIRC, it also disables the JavaScript JIT compiler and configuration profiles.
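As a rough illustration of one of the behaviors in the quote above (removing location information from shared pictures), here is a minimal Swift sketch that drops GPS metadata from an image using ImageIO. This is purely an illustration of the general technique, not Apple's actual Lockdown Mode implementation; the function name and structure are my own.

```swift
import Foundation
import ImageIO

// Illustrative sketch only: strip GPS (location) metadata from an image,
// roughly the kind of scrubbing the quoted Lockdown Mode description refers
// to for shared pictures. Not Apple's implementation.
func stripLocationMetadata(from imageData: Data) -> Data? {
    guard let source = CGImageSourceCreateWithData(imageData as CFData, nil),
          let type = CGImageSourceGetType(source) else {
        return nil
    }

    let frameCount = CGImageSourceGetCount(source)
    let output = NSMutableData()
    guard let destination = CGImageDestinationCreateWithData(
        output as CFMutableData, type, frameCount, nil) else {
        return nil
    }

    // Passing kCFNull for a property key asks ImageIO to drop that
    // dictionary when copying each frame to the destination.
    let scrubbedProperties: [String: Any] = [
        kCGImagePropertyGPSDictionary as String: kCFNull as Any
    ]

    for index in 0..<frameCount {
        CGImageDestinationAddImageFromSource(
            destination, source, index, scrubbedProperties as CFDictionary)
    }

    guard CGImageDestinationFinalize(destination) else { return nil }
    return output as Data
}
```

The same idea generalizes to other metadata dictionaries (e.g. kCGImagePropertyExifDictionary) if you want a more aggressive scrub before sharing.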
> While it may look like Apple is abdicating its responsibility to protect its users
After reading the whole thing it still looks like that. After doing all that work to identify who’s been targeted with mercenary spyware why would you want to go to someone else to have them try their best at finding who it was?
Agree the article doesn’t answer the question at all.
My guess is expertise and scale; if Apple finds five such users a day, that doesn’t support a staff of 100 forensic experts. But if Apple, Facebook, Google, etc, etc, find 500 users a day, that might.
But who knows? The article sure doesn’t.
The real question is: does Apple donate to this nonprofit?
According to this[1] it looks like Apple is more or less directly funding the helpline, so it's more like Apple is outsourcing the work.
Note that I don't think the 2024 table is complete yet. Compared to 2023 there's hardly anything on it. In 2023 you can see that Meta was specifically funding the helpline. Lots of other major tech companies show up too.
Though mostly the charity seems to run on European taxpayer money (~60% of its funding).
[1] https://www.accessnow.org/financials/
Looks like Apple has been funding them for at least the past 3 years so this may even be fair.
If you're really targeted, "they" will also go for your non-Apple devices, like your "smart" TV. So you could probably use support to harden all your tech stuff.
That’s a good point.
Apple’s game is deep vertical integration. That’s all fine and dandy for selling products, but helps not one bit for customers who aren’t all-in on Apple’s ecosystem. A third party who only does this sort of work, in a platform agnostic way, is the sane way to handle things.
Right. If there is training or some other non-Apple, vendor-agnostic stuff that has to go on (VPNs, device management), they probably don't want to get involved.
I don't see how you can come to that conclusion. Apple has obviously gone to significant effort to detect and notify in these situations, and they actually provide helpful information - pointing victims to an organization which can actually help them. Abdicating responsibility would be to do none of these.
I mean, I'm certainly not the only one; the person who wrote this clearly feels that this is a reasonable viewpoint to have.
>After doing all that work to identify who’s been targeted with mercenary spyware why would you want to go to someone else to have them try their best at finding who it was?
This is a great question, and Apple's actions here seem to undermine many of the privacy arguments they've made against Right to Repair.
It seems sensible to send people to the appropriate avenue for support; they have no idea how someone has been compromised, and by taking ownership it becomes their problem.
If someone breaks into your house do you call the builder for help?
>If someone breaks into your house do you call the builder for help?
If the builder of my house also charged me cloud storage fees on a regular basis, and took a cut of the fees when I used my credit card, and routed all my messages through the builder's server, and contacted me and said "we've detected people trying to break into your house from your messages, but we aren't going to help you any more than this," I would sure wish I had some power to change my relationship with the builder.
Most of the roles in this house break-in analogy are Apple. They control the hardware, the OS, the app store, etc. They sell the devices & also sell support. This vertical integration in part defines their "it just works" marketing. It's perfectly reasonable for folks, including Harris' security team, to call Apple first, and mocking people for thinking Apple can/should field calls in this scenario is needlessly callous. (Yes, it's also reasonable for Apple to tell people to go to a trusted 3rd party, too)
> If someone breaks into your house do you call the builder for help?
If they constructed and installed the locks, and a weakness in the locks appears to have been the entry point for the break in, then yeah maybe?
There are weaknesses in everything around you. If Apple were the owner of the issue, then what's to stop Apple covering it up? It's better for a third party to support, track, and collate the issues, and then for that knowledge to be addressed by Apple, or any other tech company.
Apparently quantum computing is around a decade away; when that happens, I've been told, you can say goodbye to all privacy.
And the locks can be remotely patched to remove the weakness... seems fair.
Not the builder, but calling the lock maker for sure.
Damn builders used glass for my windows. Don't they know how easy that is to break?
No, you call the police and your insurance company. The insurance company might enquire with the builders about who's at fault. Hope that helps.
Even if you do go to the builder, what are they going to tell you other than “yup. Lock got busted: here, here, and here, you should probably call the <people who investigate break ins for a living>”?
I feel like the notification already does exactly that, no? “Hey, your phone looks busted, go call the <people who investigate sophisticated cyber surveillance targeting for a living>.”
And from the angle of “yeah you guys installed the locks wrong and they just opened the door, you should fix that”- does anyone reject the idea that the vuln would get reported to Apple, and that they’d be on it like crazy?