November 15, 2016
In a way the ABCs are non-negotiable. D is probably the hardest, but most important:
D. Make sure edge devices have good passwords.
This can be done by demanding a password change on first configuration... good for pilots, but unscalable in interesting large deploys. The other way is to set a strong, unchangeable password... one which cannot by any means be deduced from the network. Generating a password from a network interface MAC address comes close... except that MAC addresses leak from LAN to WAN - as when MACs are used to generate other defaults. Keeping manufacturer "cloud" databases of passwords leaves the end device a brick when the service expires. [NOTE this does not mean the owner of the device, or of the LAN on which it resides, should not build and maintain a database of passwords - that is simply their responsibility]. The key seems to be a unique, long, strong local password: easy for human or machine entry, stored out of band, and only locally accessible. There are patentable and trade secret pathways therein... so let us leave it at that *grin*. Much of the above deserves to be credited to James Lyne at Xively Xperience 2015.
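One way to picture this (a minimal sketch, not any vendor's method - the factory secret, serial numbers, and password length here are all hypothetical) is to derive each device's password from its serial number and a secret that lives only at the provisioning station, never on the network. The result is unique per device, strong, and printable on an out-of-band label:

```python
import base64
import hashlib
import hmac

def derive_device_password(factory_secret: bytes, serial: str, length: int = 20) -> str:
    """Derive a unique, strong per-device password from the device serial
    and a factory secret that never leaves the provisioning station."""
    digest = hmac.new(factory_secret, serial.encode("utf-8"), hashlib.sha256).digest()
    # Base32 keeps the result unambiguous for human or machine entry.
    return base64.b32encode(digest).decode("ascii").rstrip("=")[:length]

# Hypothetical provisioning run: two devices, two distinct passwords,
# and nothing network-deducible (no MAC address) goes into the derivation.
secret = b"factory-line-7-secret"  # stored offline, illustrative value only
pw_a = derive_device_password(secret, "SN-000123")
pw_b = derive_device_password(secret, "SN-000124")
```

Printed on a label inside the enclosure, such a password is locally accessible and stored out of band; no cloud database is needed to recover it, so the device never bricks when a service expires.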
Two:
VPNs are still not easy to set up. And within a VPN there are no access controls. Modern IoT use cases of remote service beg for a way to keep setup interfaces - for example, of a motor controller and its allied pressing machine controller - available only to select external parties, while the devices freely communicate with each other directly on the LAN. This speaks to access control lists - ACLs. Modern microservice architectures are forging a pathway here, where even "interprocess" communication needs, and has, ACL-like constructs. True ACL-likes are heavyweight and hard to configure... and add more pain to IT setup.
Can ACL-like setup, with by-port access tokens, be templatized and deployed automatically? Software defined perimeter is an emerging standard with offerings from companies like Cryptzone. Some, like Illumio for VMs and containers, have a method which might transfer well from IT to OT.
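To make the templatizing idea concrete, here is a minimal sketch (not Cryptzone's or Illumio's actual mechanism - the role names, ports, and subnets are invented for illustration): a declarative template maps external roles to the device ports they may reach, and a small renderer expands it into firewall-style allow rules per device:

```python
# Hypothetical template: which external roles may reach which device ports.
# LAN-to-LAN traffic between the controllers is assumed permitted elsewhere;
# these rules gate only outside (WAN) access to setup interfaces.
ACL_TEMPLATE = {
    "motor_controller": {"service_vendor": [22, 443]},
    "press_controller": {"service_vendor": [443], "oem_support": [8080]},
}

def render_acl(template, device, device_ip, role_subnets):
    """Expand the template into allow rules for one device."""
    rules = []
    for role, ports in template[device].items():
        subnet = role_subnets[role]
        for port in ports:
            rules.append(f"allow tcp from {subnet} to {device_ip} port {port}")
    return rules

rules = render_acl(
    ACL_TEMPLATE, "press_controller", "10.0.5.12",
    {"service_vendor": "203.0.113.0/24", "oem_support": "198.51.100.0/24"},
)
```

The point of the template is that IT need only maintain the role-to-port mapping once; deployment to each new device becomes mechanical rather than hand-configured.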
Three:
Testing, tracking and logging. However well one develops edge, core, network and database strategies, they can be pried open by inquiring minds, or broken by accident or misuse. One needs testing (like API testing with SmartBear, for development and production - DevOps)... but even more important, one needs continuous monitoring, so that one can learn each time something new arises - and something new will arise - for one can find ways around passwords and network access controls.
At minimum, on LAN and WAN, one should try watching transactions with Nagios. Wireshark tends not to run all the time, and MRTG is somewhat limited. Going further, there are tools for various types of tracking, like those from Genians and PRTG. I will go beyond tools and suggest consulting someone who has experience with real world deployments - in the wild, at scale - and the monitoring thereof. As Cimetrics has with Analytika for automation systems.
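The essence of that continuous-monitoring loop can be sketched in a few lines (a toy illustration, not Nagios itself - hosts, ports, and intervals are assumptions): poll each service, and log only state *changes*, so that the "something new" stands out from routine noise:

```python
import socket
import time

def check_port(host: str, port: int, timeout: float = 3.0) -> bool:
    """Nagios-style liveness check: is the TCP service reachable?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def watch(targets, interval=60, rounds=None):
    """Poll targets forever (or for `rounds` passes), reporting transitions only."""
    state = {}
    n = 0
    while rounds is None or n < rounds:
        for host, port in targets:
            up = check_port(host, port)
            if state.get((host, port)) != up:
                print(f"{host}:{port} is now {'UP' if up else 'DOWN'}")
                state[(host, port)] = up
        n += 1
        if rounds is None or n < rounds:
            time.sleep(interval)
```

A real deployment would feed these transitions into an alerting and logging pipeline rather than printing them, but the principle - watch continuously, surface change - is the same.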