

When it happens it’s a security flaw and it needs to get patched. It’s not a normal, everyday thing.


That’s what I mean: it shouldn’t be possible to relay anything. It should only trigger when there’s a reader physically in proximity to the phone.
Please keep in mind this is happening on the victim’s phone, which is not rooted; the malware is a regular, non-system app.
If it were happening on a rooted phone I could understand being able to subvert the NFC chain, because at some point it has to pass from hardware to software, and if you’re privileged enough you can cut in there. But the malware app is not privileged.


For those confused about how this could work with chip cards: the malware has two components, one installed on the victim’s phone and one on the attacker’s. The attacker initiates a contactless transaction at an ATM or payment terminal, and their phone communicates in real time with the victim’s, which is tricked by the malware into reacting to that event and producing the one-time token, which is then relayed back to the attacker and used.
The attacker has also previously social-engineered the card PIN from the victim, in case the transaction requires it (an ATM withdrawal definitely does).
The fact that you can trick the NFC system on the phone into reacting to “phantom” payment events and intercept the resulting token sounds like a pretty big problem. The former should be entirely hardware controlled, and the latter should not allow the token to go anywhere except to the hardware.


Ironically, if GrapheneOS were to succeed, it would lead to a system every bit as locked down as a manufacturer’s Android. GrapheneOS also doesn’t allow you to have root etc.
IMO Graphene wants a place at the big players’ table. They’re not in it for user freedoms.


Wow, basically everything you wrote about Manjaro was wrong:


They just need to tunnel the data and let the client decrypt it. That’s basically what Proton does with their bridge app, and also basically what Tuta’s client does.


What OP is trying to do isn’t impossible; it’s actually very interesting. There are lots of people who use tab workflows instead of bookmarks, and I think everybody would benefit from better in-browser search. Just because bookmarks are how it was done 30 years ago doesn’t mean we can’t try new things.


What is amazing to me is how some people will come out of the woodwork to tell a person they’re using their browser “wrong”. Just let them be if you have nothing to contribute.


It used to be part of the CPU itself. Intel CPUs would throttle themselves down when reaching critical temperatures. Is that no longer the case?


Yeah, that must be the reason. Not that most men don’t know how, won’t ask how, will get bored after two minutes of absent-mindedly licking the general area, or generally just won’t. 😄


Bandwidth is a finite resource. If everybody on your street wants that 10GB at the same time there’s going to be throttling.
But that’s a common-sense type of throttling. Net neutrality is about not giving priority to certain types of content or websites over others.


Back when I did LFS I dealt with this by giving each package an /opt prefix, symlinking their respective bin/, sbin/, lib/, man/ and so on dirs under a common place, and adding those places to the relevant system integrations (PATH, /etc/ld.so.conf etc.)
I put together a bash script that could manage the symlinks and pack/unpack tarballs, and also added a metadata file and a configure/make “recipe” to each package dir. It worked surprisingly well.
A handful of packages turned out to hardcode system paths, so they couldn’t be prefixed into /opt without patching, but most things could.
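As a rough sketch of the approach (the package name and the /opt/common location are placeholders; the real script also handled the tarball packing and metadata):

```bash
#!/bin/bash
# Build a package into its own /opt prefix, then symlink it into a shared tree.
PKG=foo-1.2.3                  # hypothetical package
PREFIX="/opt/$PKG"
COMMON=/opt/common             # the aggregate tree the system actually sees

./configure --prefix="$PREFIX" && make && make install

# Link the package's bin/, sbin/, lib/, man/ contents under the common tree.
for dir in bin sbin lib share/man; do
  mkdir -p "$COMMON/$dir"
  for f in "$PREFIX/$dir"/*; do
    [ -e "$f" ] && ln -sfn "$f" "$COMMON/$dir/${f##*/}"
  done
done

# One-time system integration (PATH, dynamic linker, man):
#   export PATH="/opt/common/bin:/opt/common/sbin:$PATH"   # e.g. in /etc/profile
#   echo /opt/common/lib >> /etc/ld.so.conf && ldconfig
#   export MANPATH="/opt/common/share/man:$MANPATH"
```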


It will not, though. Explicit sync is not a magic solution, it’s just another way of syncing GPU work. Unlike implicit sync, it needs to be implemented by every part of the graphical stack. Nvidia implementing it will not solve the issues with compositors not having it, graphical libraries not having it, apps not supporting it, and so on and so forth. It’s a step in the right direction, but it won’t fix everything overnight like some people think.
Also, it’s silly that this piece singles out Wayland and Nvidia, because (1) Wayland doesn’t implement sync of any kind, they probably meant to say “the Wayland stack”, and (2) Nvidia is not the only driver that needs to implement explicit sync.


Perhaps we could suggest OP other things to try before we suggest they rip out their GPU. You know, a basic problem-solving approach: use the Nouveau or generic VESA driver to rule out the proprietary Nvidia driver, or a different screen-sharing method to rule out RDP, which is a proprietary Windows protocol and may not work perfectly from Linux with an unusual hardware configuration.
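For example, a quick way to check which driver is actually bound to the GPU, and to temporarily fall back to Nouveau for a test. This is just a sketch assuming an Arch-like setup; the initramfs command and any nouveau blacklist shipped by the Nvidia packages vary by distro:

```bash
# See which kernel driver is currently in use for the GPU.
lspci -nnk | grep -iA3 'vga\|3d'

# Temporarily blacklist the proprietary modules so Nouveau loads on next boot.
# (The Nvidia packages often install their own blacklist of nouveau; remove or
# override that too if present.)
sudo tee /etc/modprobe.d/99-test-nouveau.conf <<'EOF'
blacklist nvidia
blacklist nvidia_drm
blacklist nvidia_modeset
blacklist nvidia_uvm
EOF
sudo mkinitcpio -P    # Arch-based; use update-initramfs -u on Debian/Ubuntu

# Reboot, test the screen sharing, then delete the file and rebuild to undo.
```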


rsync -avxH will copy the files between drives (-a preserves permissions, owners and timestamps, -x stays on one filesystem, -H preserves hard links).
You can simply mount the new drive in the same place as the old one after the copy; that way you don’t have to change any paths.
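Something like this, assuming the old drive is mounted at /mnt/old and the new one at /mnt/new (placeholder paths and device name, adjust to your setup):

```bash
# Copy everything, preserving permissions, owners, timestamps and hard links,
# without crossing into other mounted filesystems. Mind the trailing slashes.
sudo rsync -avxH /mnt/old/ /mnt/new/

# Then point the old mount point at the new drive, by UUID so device names
# can't shift between boots.
sudo blkid /dev/sdb1                # note the new partition's UUID
# In /etc/fstab:  UUID=xxxx-xxxx  /mnt/old  ext4  defaults  0 2
sudo umount /mnt/new /mnt/old
sudo mount /mnt/old                 # now backed by the new drive, paths unchanged
```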


> The real dependency problem is that when an AUR package updates and Manjaro’s packages are not new enough for the update, it will cause breakage.

How many AUR packages do you use? I have about 70 installed right now. Never had a source-level incompatibility happen. You’d have to let system updates lapse for years to lose source compatibility with a current AUR package.
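If you’re curious about your own count, foreign (non-repo) packages are a decent approximation of AUR installs:

```bash
pacman -Qm | wc -l    # packages not from the official repos, i.e. mostly AUR
```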
Nobody’s perfect; all Linux distros out there have had a rough start. The ones that endure and stick around are the ones that eventually improve. If you were around when Arch came out you may recall very similar attitudes from fans of other entrenched distros disparaging their efforts. Arch wasn’t born perfect either; they made plenty of mistakes in their early days.
But if you demanded perfection all the time you’d never use the vast majority of distributions that are trying something new. We need to rise above partisan and petty differences. Linux is a hotbed of innovation and freedom, and we as a community need to encourage and nurture trying new things, not dump on them.
> This is most importantly true in terms of delayed security updates.

Security updates aren’t delayed in Manjaro; they’re pushed through out of band.
> You also don’t understand how the AUR works in conjunction with outdated Manjaro packages, which will cause dependency problems and lead to breakage.

Once you’ve compiled an AUR package it will remain compatible with the system you compiled it on, until you update the system and introduce an incompatibility.
This is true for any Arch or Arch-based distribution, and it has nothing to do with when the distro updates its packages. It depends purely on whether a particular update happens to break binary compatibility with a particular AUR package. Users who don’t regularly update their AUR packages to keep them in sync with the system will experience seemingly random breakage, depending on which AUR packages they use. It can and does happen on Arch just as well as on any derivative distro. You need to either automate AUR updates or update them by hand to avoid it.
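With an AUR helper this is a one-liner, e.g. with yay (assuming you use yay; other helpers have equivalents):

```bash
yay -Syu           # update repo packages and AUR packages together
yay -Syu --devel   # also check and rebuild -git/-svn (VCS) packages
```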
> you can read what Arch’s security team thinks about Manjaro here

That’s not “Arch’s security team”; it’s one person on a third-party forum, with a history of issuing personal statements reeking of a personal grudge. Yeah, I know that comment, unfortunately. It’s a singular, isolated piece of flamebait and it makes me sad to see it’s still being bookmarked and passed around 5 years later.
“Countless” mistakes meaning two, which were easily fixed.
There’s nothing wrong with Manjaro; in fact it’s probably the most user-friendly Arch distro. I’ve been using it for years, and I chose it after trying several other distros. It was the one where everything worked out of the box: graphics, audio, peripherals (including controllers and exotic mice), and of course Steam and gaming.
They package drivers and stable kernels out of the box. They provide an easy-to-use tool for switching and installing drivers and kernels. They attempt to add extra stability to the distro (not all of us like or need to stay on the very bleeding edge all the time). Delaying the packages has zero relevance for the AUR, and anybody who believes otherwise should probably stop using the AUR, because it’s obvious they don’t understand how it works.
People who keep linking those outdated hate lists are actively doing themselves and everybody else a disservice. Promoting hate against an Arch derivative for no good reason will not help Arch’s cause; on the contrary, it makes newcomers shy away from the whole can of worms and drives them to Ubuntu.


The unfortunate reality is that some people will buy anything that expires, on the remote chance someone might be interested. If they’re set on doing that there’s nothing you can do; they will grab it and block it for at least one more year.
IMHO the best thing you can do is nothing. I mean nothing beyond discreetly checking the domain’s state in WHOIS. Don’t inquire explicitly about the domain, and don’t use the WHOIS form on websites you don’t trust; some of them exploit such queries to grab domains themselves.
You can use whois from the command line (best way). Alternatively, the TLD registry will have a WHOIS form on their official website.
If you don’t generate any apparent interest they will eventually let the domain lapse. Check back a year from now.
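A minimal check from the shell (example.com is a placeholder, and the exact field names vary per registry):

```bash
# Check expiry and status without touching any third-party WHOIS site.
whois example.com | grep -iE 'expir|status'
```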