- Intel Mesh Commander (amt Tool): Now Available For Mac Download
- Intel Mesh Commander (amt Tool): Now Available For Mac Free
Intel remote management with support for using the wireless card is something that got me quite terrified when I first tried it on my T420. Basically, no recent Intel laptop can ever be secured unless you physically remove the wireless and wired network cards. Intel has a jolly video demoing how to pwn a machine remotely (framed in the positive light of the IT service desk taking control and fixing a boot problem). This is for 6th-generation+ CPUs, but the systems in older CPUs are also quite powerful.
And can't really be protected beyond using a password/passphrase. I'm not sure (and probably no-one knows) if there's also a golden key/backdoor. (I wonder if the video is actually narrated by Zach Woods (Donald 'Jared' Dunn from Silicon Valley) - or if it just sounds a bit like that). At any rate, if watching that jolly video doesn't fill you with fear, I'd say you're not sufficiently paranoid.
Enabling it isn't generally easy either, and it can be secured with certificates on both sides. Every time this topic is brought up, I'm surprised by the number of people who think remote access is bad and cannot possibly be secured, while not actually digging into what it would take to compromise such a machine. Maybe it's the server admin in me, but OOB BIOS-level remote access and management for my systems is a godsend, and my biggest issue with it is that they tend not to include those features in their 'enthusiast' chipsets.

The security expert in me is terrified at the concept of remote access that, far too often, isn't actually managed very well.
I've been to far too many places that have management interfaces like this and either don't know it, or don't apply updates. It's not that remote access is bad, it's that all the remote access I've seen has been horrible.
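One concrete example of how bad these interfaces can be: web UIs of this era typically rely on HTTP digest authentication (RFC 2617), where the "response" is just a couple of MD5 hashes over the username, realm, password, and nonces. Anyone who captures or is handed that hash can brute-force the password offline. A minimal sketch with hypothetical credentials and nonce values:

```python
import hashlib

def digest_response(user: str, realm: str, password: str,
                    method: str, uri: str, nonce: str) -> str:
    """Compute an RFC 2617 digest-auth response (no qop, for brevity)."""
    ha1 = hashlib.md5(f"{user}:{realm}:{password}".encode()).hexdigest()
    ha2 = hashlib.md5(f"{method}:{uri}".encode()).hexdigest()
    return hashlib.md5(f"{ha1}:{nonce}:{ha2}".encode()).hexdigest()

# Hypothetical values: an attacker who sees the response on the wire can
# try candidate passwords offline until one reproduces it.
observed = digest_response("admin", "Digest:1234", "P@ssw0rd",
                           "GET", "/index.htm", "abcdef")
for guess in ["admin", "password", "P@ssw0rd"]:
    if digest_response("admin", "Digest:1234", guess,
                       "GET", "/index.htm", "abcdef") == observed:
        print("cracked:", guess)
```

MD5 here is fast by design, which is exactly why disclosing the hash is so dangerous: dictionary attacks against it are cheap.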
Hash disclosure is just seen as 'meh, that's the way it is', and being able to log in without a password is standard unless IT has taken the time to update something they often don't even know about.

How do you know the fab you send your design to doesn't implement a backdoor?

It really wouldn't be that hard to compare what you received from the foundry with the masks you sent them. There are services that decap ICs and attempt to reverse engineer them. Here's the first hit I found on Google:

You wouldn't need the reverse engineering. All you would need to do is compare that all the metal layers (and perhaps poly) match what you sent.
That's an almost trivial comparison. The hard part of reverse engineering is figuring out what the circuitry actually does, and you already know that, since it's your chip!

It could be possible for a fab to alter diffusion layers to change the functionality of a chip. That would be harder to detect by services such as the one I mentioned. But it would be very hard, and very time consuming, to attempt to hack in a backdoor by only messing with base layers, rather than with metal or with poly (where changes are easily observed).
If there were to be a backdoor anywhere in a design, it would be in IP you used on your chip that you got from either your foundry or a third-party IP supplier. It would be easy enough to hide all kinds of stuff there.

It's not just Intel you're trusting, it's the OEMs, who also have an EXTREMELY poor record. Just take a look at the ENORMOUS number of BMC and auto-update vulnerabilities.
They really just either couldn't care less, or are deliberately making machines vulnerable. Things are being pushed into the IME that aren't optional, often aren't wanted by users (or are snuck in so they wouldn't know), don't all have to be there, and can't be verifiably disabled or overridden by end users. Even if it weren't backdoored (unlikely), it presents an ENORMOUS attack surface at the very worst possible privilege level.
Open source designs that are at least inspectable by researchers would be a start. But more importantly: allow users to CHOOSE whether they want to override the software (it IS, after all, software and not hardware). Why shouldn't we be permitted to run Libreboot etc. on modern hardware, and to know that when we turn a machine off, it's off, without unplugging network cables, power cables, batteries, etc.?

None, but you are taking away from it things that the article doesn't say. AMT is a subset of the ME; it's not available on all processors, and even when it is, it also requires a supporting chipset. Nearly everyone reading this on a laptop/desktop system is doing so on an Intel processor.
Go ahead and attempt to enable remote access without a supported processor/chipset/BIOS, and then come back and complain about this unexpected remote access. There is a lot of misunderstanding about this technology in the security world, which strikes me as odd because it's well-documented and readily available for your own testing.

But as I mentioned, I was rather terrified when I tried this on my ThinkPad laptop. I'm sure many other business laptops have similar facilities (and yes, of course it is a feature to be able to manage a large fleet of machines, both workstations and servers). As for trust, first, what we generally know is that any complex system is likely to have flaws. So any remote access system is likely a back door, intentionally or not.
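Testing whether a machine actually exposes AMT is straightforward: the firmware, when provisioned, listens on a handful of well-known TCP ports (16992/16993 for the web UI, 16994/16995 for redirection, 623 for RMCP). A sketch that just attempts TCP connections to those ports (the target host is a placeholder):

```python
import socket

# Ports the Intel AMT firmware is known to listen on when provisioned.
AMT_PORTS = {623: "RMCP", 16992: "AMT HTTP", 16993: "AMT HTTPS",
             16994: "redirection", 16995: "redirection/TLS"}

def probe_amt(host: str, timeout: float = 1.0) -> list[int]:
    """Return the AMT-associated ports on `host` that accept a TCP connection."""
    open_ports = []
    for port in AMT_PORTS:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return open_ports

hits = probe_amt("127.0.0.1")  # replace with the machine under test
print("open AMT ports:", hits or "none")
```

Note that AMT answers on these ports from the firmware itself; the host OS never sees the connections, so you have to probe from a second machine to get a meaningful result.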
A physical off switch would be prudent (arguably, switching off the wireless card and not connecting an Ethernet cable is one option; but then your choices are no networking or possible vulnerability). It's not as if any networked machine is 'perfectly secure', but it would be nice to be able to set the scope of exposure more easily and granularly. Second, one thing is trusting Intel not to intentionally install a backdoor, keeping in mind that includes any 'golden key' for updates.
And not leaving any holes in the interface they expose (i.e., getting the feature right). That in itself is a pretty high bar, imnho. But another thing is trusting Intel not to build in secret backdoors; that is, if they say there's no IME in product X, then trusting them on that. That is, to me, a much more trivial thing to trust. I know there will be bugs in the software and hardware, but I'm not all that afraid of maliciousness.
If I were working for a Chinese intelligence service, the threat model might be different. In sum, given that there are bound to be bugs, and there might be well-intentioned software-upgrade support, I would prefer it were easier to buy high-end parts without stuff like the IME (this also applies to AMD, btw), or at least that it were easy to remove a jumper or toggle a DIP switch to turn the stuff completely off. Ed: finally, re: server vs. laptop: there are some things where exposure doesn't matter as much on a server: got my encrypted backups? Encrypted email?
Got my secret key from my laptop? That is a problem.

TL;DR: Build yourself a desktop with a non-vPro CPU and a non-Intel NIC. Check 0 for a list of such CPUs. The usual recommendation is to either get a pre-2009 AMD machine (which is what my present desktop is) or get a SPARC machine.
My boss won't buy me a desktop with an UltraSPARC CPU, so I won't bother with that. AMD has substantially less documentation than Intel, AFAICT, so AMD products deserve investigation given the relative starkness of their documentation. For an albeit outdated list of systems to avoid, see 1. I will assume that you are trying to build a desktop from off-the-shelf parts. For sure, get a CPU without vPro, since that downgrades the AMT/IME feature set to Standard Manageability: 'Please note that NON-vPro™ Intel® desktop processors will make Intel® ME FW switch its feature set from full Intel® Active Management Technology to Intel® Standard Manageability, which does not support the Intel® AMT KVM Redirection feature (it is disabled internally in the Intel® ME FW).' 2 When the CPU doesn't support AMT, Standard Manageability is what runs in the background.
This says two things: (1) non-vPro desktop CPUs downgrade AMT to Standard Manageability, and (2) the IME is active regardless of the CPU's feature set. What is Intel Standard Manageability? 'Q8: What is Intel® Standard Manageability and can it run on a non-Intel® vPro™ technology-based CPU? A8: Some basic management capabilities are available on non-Intel® vPro™ technology-eligible Intel® Core™2 processors as well as Intel® Pentium® dual-core and Intel® Celeron® processor-based CPUs. Intel® Standard Manageability is available only on desktop systems right now (not notebook), and only includes basic capabilities such as hardware and software inventory and remote diagnostics.' 3 I can't tell if Standard Manageability is a lite version of AMT. Intel's documentation on it 4 isn't a whole lot of help.
Intel mentions that Atom and i3 platforms generally do not support AMT. 5 The bottom of this page 6 lists those Intel chipsets with an IME. I'd consider that a list of chipsets to avoid, but those cover almost all modern chipsets, AFAIK. For sure, don't use any Intel NICs: 'Adding another NIC will not nullify Intel® vPro™ technology verification, but Intel® Active Management Technology communicates only through the onboard network interface of Intel vPro technology, and it is strongly recommended that an additional wired NIC is not added to the platform as this might cause some of the features of Intel AMT to not operate as expected.'
7 0 = 1 = 2 = 3 = 4 = 5 = 6 = 7 =

I think we're talking about two different things. BMCs run separately from the rest of the system and remain active even when the machine is halted. They provide remote mouse/keyboard/monitor, power control, remote device mounting, BIOS settings, hardware sensors and monitoring, etc. These have been in servers for ages, do the same thing as the IME, and are far less secure. None of this has any relevance to software updaters, or to software at all really, since a lot of these boxes ship without an OS or even hard drives.

My first thought was that it seems increasingly clear that Stallman has been right all along.
The problem is that being philosophically right doesn't always mean being practically right. In order to create the perfect Stallman-esque machine, one would have to design everything from the logic chips up from scratch, because in the end, no third party can be trusted.
He says this himself about the Loongson system he uses daily; he considers it a compromise, but one heavily weighted in his favor. In short, Stallman has been right all along, but there's little we can do about it from a practical standpoint.

Cherry-picked from that thread, I just have to point out this awful slippery-slope argument: I don't see how getting rid of the ME helps. If you don't trust Intel then you don't trust Intel.
They can backdoor the main CPU as easily as they can backdoor the ME. The only true solution is an Open Source Hardware CPU, and some means of verifying that the hardware matches the HDL. There are already Open Source CPU designs, but the verification is more difficult.
Even if you use a big enough process node that you can decap the chip and inspect it optically (e.g., using similar techniques as used with ), there is still the possibility of dopant-level backdoors, which are much harder to detect.

I don't see how getting rid of the ME helps. If you don't trust Intel then you don't trust Intel.

As dandelionlover says, security is not boolean. I may or may not trust Intel to backdoor the processor I run code of my choice on in more or less visible ways (the more visible, the greater the risk of a public-opinion backlash if what Intel did is revealed), but I absolutely do not trust Intel and its business partners to design an independent subsystem that runs their code and is not vulnerable to Intel (or any rogue employee thereof), Intel's business partners (or any rogue employee thereof), or US federal entities (or any rogue employee thereof) that can coerce any large or small US company.
I am not suspecting Intel of ill intent. I am suspecting Intel and its partners of incompetence. There is a big difference.
Intel cannot actually “backdoor the main CPU as easily as” they can leave the ME open to attacks. The second is so easy that you might as well assume it is the case until convincing arguments have shown otherwise. Trusting Intel to produce a CPU and trusting Intel with the ME are completely different levels of trust.

If I could run my own software on the Intel ME, I wouldn't mind. In some sense, having the Intel ME is worse than having Windows as your OS: at least you can apply security patches to Windows, and should you choose to do so, you can replace it with your favourite OS.
However, the Intel ME is entirely closed source and locked down: probably running out-of-date software with several unpatched vulnerabilities, with no way for the user to inspect it, patch it, turn it off with a jumper, or replace it.

Well, having a KVM that one can use to remotely connect to their machines is a good reason (if we leave the statement at this level and assume that everything is done in the best interests of the end users, at their informed demand). For example, having IPMI with serial-console access, or a full-fledged VNC on HP iLO, proved really useful on several occasions where I was able to fix boot-level problems remotely rather than having to drive for an hour to visit the server. Remote access is of less importance with desktops, and even less with laptops, as in most cases the user is physically present; but still, I've had a few cases where I regretted not having a way to remotely connect to my machine at home (it had failed after a power outage) while I was on the road.
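For the server case mentioned above, the usual way to talk to a BMC over the network is `ipmitool` with the lanplus (RMCP+) interface. A hedged sketch that builds such a command line; the host and credentials are placeholders, and `power_status` assumes ipmitool is installed and a reachable BMC exists:

```python
import shlex
import subprocess

def ipmi_cmd(host: str, user: str, password: str, *args: str) -> list[str]:
    """Build an ipmitool command line for IPMI-over-LAN (RMCP+)."""
    return ["ipmitool", "-I", "lanplus",
            "-H", host, "-U", user, "-P", password, *args]

def power_status(host: str, user: str, password: str) -> str:
    """Ask the BMC whether the chassis is powered on or off."""
    out = subprocess.run(ipmi_cmd(host, user, password,
                                  "chassis", "power", "status"),
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

# Example with placeholder credentials: build the command without executing it.
cmd = ipmi_cmd("10.0.0.5", "admin", "secret", "chassis", "power", "status")
print(shlex.join(cmd))
```

The same wrapper works for `chassis power on`, `chassis power cycle`, or `sol activate` for the serial console; the point is that all of this goes through the BMC, independently of whatever state the host OS is in.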
I'd say the very core idea isn't really bad; Intel's implementation is terrible.
The whole point of an independent hardware KVM is that it's autonomous: no matter which state the software (and unrelated hardware) is in, you can still access the system and try to recover. But, sure, I fully agree about the need for such a system to be an open platform that the user can audit, update, and control, rather than a black box. (Well, if there is a platform at all: the simplest external IP KVM could be a very dumb hardware box whose only software is an extremely primitive TCP/IP stack.)

Added: Anecdote: in the early 2000s, a company I know of had the simplest of remote-access tools: DIY remote-reboot devices made from externally powered Realtek network cards.
Sending a WoL packet to a known MAC (on a well-isolated network) would forcibly power-cycle the connected machine or router. No amount of fancy software solutions would bring a well-hung host back online. ;)

The whole point of an independent hardware KVM is that it's autonomous. Which the Intel ME isn't, any more than a hypervisor is. It uses the same CPU and network interface, and can be misconfigured in the same ways a hypervisor can. It essentially is a hypervisor, but one that you can't remove or replace.
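The Wake-on-LAN trick in that anecdote is about as dumb as networking gets: a "magic packet" is 6 bytes of 0xFF followed by the target MAC address repeated 16 times, usually sent as a UDP broadcast to port 9. A sketch (the MAC address is a placeholder):

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """6 x 0xFF followed by the MAC address repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast a Wake-on-LAN magic packet on the local segment."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(build_magic_packet(mac), (broadcast, port))

# send_wol("00:11:22:33:44:55")  # placeholder MAC of the target NIC
```

The receiving NIC matches the pattern in hardware, which is exactly why it works even when the host is wedged: no cooperating software on the target is involved at all.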
There are advantages to really independent hardware, but firmware doesn't get you those; and there are advantages to management software, but none of them require the software to be burned into the silicon. I agree that the feature set can be useful; it just doesn't belong where they put it (and in that sense we may only be arguing past each other).

Yes, but BMCs do more than just power machines on and off, and are used for more than powering machines on and off. Automatic hardware provisioning fully depends on BMCs. Intel is the party in control, and Dell, and Supermicro, and HP.

IPMI is the API you're looking for; you can always just shut off the networking.
But again, it's still done through the BMC. SOMETHING has to do it, and that something needs access independent of the OS and the rest of the system, due to the nature of the settings you can change. Also, how is the IME any different from providing an API? The API has to actually do something; how exactly do you propose to do that without something like the IME?

Yes, but BMCs do more than just power machines on and off, and are used for more than powering machines on and off.

Turning on a powered-off machine is the only thing that couldn't be done by a hypervisor.
Once the machine is turned on, a hypervisor should be able to do anything IME does by doing it the same way IME does it, as long as the hardware is sufficiently documented. The part of IME that is unnecessary is the software, because it's software in lieu of transparency and documentation. A separate management processor that you could install your own software on would not be a problem, and then you could even turn on the machine.
A hypervisor isn't going to be able to interact with Option ROMs, change C-states, change the CPU's VT-x/VT-d parameters, configure the HBA, change device addressing, configure port signaling speed, etc. That could all be implemented, but it would require another hardware controller, or some sort of API with lower-level hardware access than the OS has, plus something to actually provide that access and communicate with the various components; at which point you've just invented the IME/BMCs.

Edit: 'The part of IME that is unnecessary is the software, because it's software in lieu of transparency and documentation. A separate management processor that you could install your own software on would not be a problem, and then you could even turn on the machine.' I suppose I can agree with that, and that's basically what most BMCs are, though installing your own software is hit or miss.
(I've read about someone who got a stripped-down Linux kernel and userspace running directly on a Supermicro BMC once.) Intel just moved it into the CPU. The IME just seems like a strange hill to die on when it's probably more secure than 90% of the remote-management setups in use today, yet everyone seems perfectly OK with those.

That could all be implemented, but it would require another hardware controller or some sort of API that has lower-level hardware access than the OS does, and something to actually provide that access and communicate with the various components, at which point you just invented IME/BMCs.

What you've really invented is hardware drivers, which have been at home in the OS forever.

IME just seems like a strange hill to die on when it's probably more secure than 90% of the remote management setups in use today.
The problem is they put it everywhere instead of only where people asked for it. You can get away with more with something that is opt-in, because the people who don't like it can get the version without it. So now it doesn't have to be as secure as other remote-management setups; it has to be as secure as not having remote management.

What you've really invented is hardware drivers, which have been at home in the OS forever.

Those drivers need to actually communicate with something, though, and that something is exactly what BMCs are. If the hardware physically providing communication with those systems isn't there, no amount of drivers will help.
Additionally, many of those things have to be configured before the OS is loaded and before the hardware they control is fully initialized. Not possible inside the OS.

The problem is they put it everywhere instead of only where people asked for it. You can get away with more on something which is opt in because the people who don't like it can get the one without it.
So now it doesn't have to be as secure as the other remote management setups, it has to be as secure as not having remote management.

Yeah, I guess that's an aspect I didn't think of. I can see why people wouldn't be happy running remote management on their consumer hardware; I guess my view is biased by the fact that on the systems I frequently work with, having a BMC is a given. My only experience with the IME in consumer-level hardware was a neutered version where remote access wasn't an option, just some local hardware monitoring/management that seemed mostly useless; but I guess that must've changed.
Those drivers need to actually communicate with something though, that something is exactly what BMCs are. If the hardware physically providing communication with those systems isn't there, no amount of drivers will help. Additionally, many of those things have to be configured before the OS is loaded and the hardware they control is fully initialized. Not possible inside the OS.

You're going to have a chicken-and-egg situation at boot. The SATA/SAS HBA needs some code to read the OS with, but in a sense this is just a piece of your OS that has to be installed on the HBA, in the same way that GRUB has to be installed in the boot partition.
Once the OS is running, it can load different code into the HBA, or replace the code that will run on the next boot. We can argue about whether to call that sort of code a BMC or not, but the more important question is whether people can replace it, so that people can fix things the manufacturer doesn't.

For Intel, it's an enterprise marketing bullet; for the rest of us, it's an opportunity to secure our systems the way we want. However, that capacity has been denied us. The ideal security mechanism would provide us with a per-CPU key from Intel, which we would use to upload our own user key to the CPU, so that only user-signed firmware is loaded. The exact mechanism could be handled in many ways. Right now, we're locked out, and primed for being snooped on without our consent.
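The "only user-signed firmware is loaded" check described above is conceptually simple. Real platforms would use asymmetric signatures (RSA or ECDSA) verified against a user-provisioned public key; the stdlib-only sketch below substitutes an HMAC as a stand-in for a signature, purely to illustrate the verify-before-run flow. All names, keys, and the firmware blob are hypothetical:

```python
import hashlib
import hmac

def sign_firmware(image: bytes, user_key: bytes) -> bytes:
    """Stand-in 'signature': HMAC-SHA256 over the firmware image.

    A real platform would use an asymmetric signature checked against a
    user-provisioned public key, so the signing key never leaves the user.
    """
    return hmac.new(user_key, image, hashlib.sha256).digest()

def verify_and_load(image: bytes, signature: bytes, user_key: bytes) -> bool:
    """Refuse to 'load' firmware whose signature doesn't verify."""
    expected = sign_firmware(image, user_key)
    return hmac.compare_digest(expected, signature)

user_key = b"hypothetical-user-provisioned-key"
firmware = b"\x7fELF...management-firmware-blob"  # placeholder image
sig = sign_firmware(firmware, user_key)

assert verify_and_load(firmware, sig, user_key)                 # accepted
assert not verify_and_load(firmware + b"\x00", sig, user_key)   # tampered: rejected
```

The design point is who holds the key: if the user provisions it, the user (or their IT department) decides what firmware runs; if only the vendor holds it, the user is locked out exactly as the comment describes.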
I like Intel's engineering, but everything else I could do without. And ARM is no better; I'd say worse, actually, but I'm not going to defend that. Please, please, AMD: do the 'right thing' with Zen. Allow us to be in command of our own security, so we may delegate it to those we trust with the technical know-how, be it OS vendors, our IT departments, or our own selves.

Semi-hijacking your question to mention the other side of the pond too, which many people do not seem to know about; I run into this issue quite regularly. While the laptop situation is indeed pretty bad (the X200 is probably about the best you can get here), for people looking for a modern workstation, AMD's current FX processor lineup is still free from this trash.
Past the 2013 designs (Family 16h 1), AMD includes their own equivalent of Intel's ME, called the PSP 2, so presumably the upcoming Zen is going to be heavily backdoored too. Now, you can still buy an 8-core/4 GHz+ AMD Vishera 3 generation chip (the 'current' FX), add a motherboard with ECC support (in stark contrast to Intel, all AMD FX processors fully support ECC memory, and ASUS, for example, sells a number of AMD motherboards with ECC support), and build yourself a workstation that will easily last another decade, possibly longer (you can always buy a motherboard or two of your favourite model as a backup for when they finally disappear from the market). All FX processors are factory unlocked 4 and can be pushed far past their design specs with decent cooling, adding another bit to their possible usefulness over the long term. Some more (random) reading on the subject of avoiding the AMD PSP and Intel ME: 1, 2, 3, 4.

Depends what you mean by 'laptop'. According to 1, the fastest processor that shipped with an X200 is the Intel T9600. Checking Geekbench, today's top-of-the-line 64-bit ARM phablets seem to have significantly higher CPU performance (both single- and multi-core) and an equal amount of RAM.
AFAIK, those devices don't have anything like Intel ME; they do typically have a hypervisor running above the Linux kernel, but it should be possible to replace it if the bootloader is unlocked. If someone comes out with an ARM64 Chromebook in the near future, that might be a good device to target for fully-free system efforts.
Prediction: over time, there will remain no commercially significant high-end chips without this sort of 'security subsystem'. Recommending ARM, PowerPC, or other architectures over x86, as the FSF suggests, will not get you very far, because the problem (or, as chip vendors call it, the solution) is not in the ISA; it's in the chip, and every vendor making the sort of chip that can power a general-purpose computer will end up this way.
Of course, in practice, as long as (say) Linux runs on the machine, the existence of the ME or the like is almost inconsequential for the user, because you run the same software, and there's never a shortage of security vulnerabilities right there in the OS and the userspace software. In terms of the impact on the number of vulnerabilities, eliminating C and C++ would go way further than eliminating black-box 'security' hardware, just because the huge amount of C and C++ code, much of it written hastily and committed the moment it 'runs on my machine', presents a much larger attack surface than the black-box hardware + software system.
But of course the FSF will never recommend ditching C and C++.

In terms of the impact on the number of vulnerabilities, eliminating C and C++ would go way further than eliminating black box 'security' hardware, just because the huge amount of C and C++ code, much of it written hastily and committed the moment it 'runs on my machine', presents a much larger attack surface than the black box hardware + software system.
But of course the FSF will never recommend ditching C and C++.

This is missing the point: you can and do have control over the C code which runs on your machine; the same cannot be said for the ME. The comparison is entirely disingenuous.