Wednesday, November 27, 2019
The New York Times has an excellent series of reports on topics related to digital privacy. Technical people won't be surprised by the technical issues described, but the motivations of the hackers may not be something we often think about. Both the layman's descriptions of the technical aspects and of those motivations are worth sharing with friends, family and co-workers.
The message for us is that surveillance is far more common than most people would suspect and that being truly safe is extraordinarily difficult. The article puts to rest the idea that “I’m not important enough to be targeted.” Anyone who might be a conduit to a “high value target” must also be considered a possible target. Modern hacking tends to be a multi-step process in which the attacker uses lower-level people to collect the information that will then be used to compromise the primary target. This is one of the reasons organizations need to consider security on an institutional basis and not based on the perceived vulnerability of specific individuals. A chain is only as strong as its weakest link. One last thing: don’t assume that the value of your site to hackers matches the value it holds for you. Hackers may want to use your server to run a variety of criminal activities.
Tuesday, August 27, 2019
Ransomware is not new and it’s not a single thing. It’s a class of malware characterized by holding your data hostage until a “ransom” has been paid. Your data continues to reside on your drive but has been encrypted by the attacker. Your computer’s operating system continues to work and a ransom-note file is left for you to read about how to pay off the attacker, but all your personal files are unreadable because they have been encrypted. The decryption key, which will make the files readable again, is then sold to the victim, who can run the decryption software to regain access to his files. (In some cases, the criminals are really mean: they take the money and don’t even provide the decryption key.) Individuals may pay $50 to $500 in ransom and institutions often pay hundreds of thousands of dollars. The risk from ransomware is so great that there are even business insurance policies that specifically cover it. Such policies are now starting to come with clauses stating that the policyholder has to prove that they have taken basic security precautions if they are to be compensated.
Ransomware typically gets onto the victim’s computer via a malicious web link. (See diagram on the next page.) The link usually arrives in an email message, but it might come in via SMS or some social media app. The key element is that the user believes they are clicking on something useful, such as information about an abuse of their bank account, but is actually downloading a malicious app. The app starts by encrypting all the data on the computer, then all the data the computer can reach on the local network, and then it attempts to compromise other systems on the network. Once the encryption has been completed, the user typically sees some sort of message demanding payment. In some cases, there is a time limit of a day or two; after that period, the ransom increases or the decryption key will no longer be offered. The ransomers want to keep their targets in a constant state of panic.
Sometimes it is possible to tell that a system is being taken over. If the computer becomes very slow or you hear the hard drive constantly working, that may be the first sign that the drive is being encrypted. A second sign is that data or applications stop working. If you spot ransomware in the process of encrypting the drive, turn off the computer immediately and call for technical help.
As mentioned in the first section, the EU has a repository of tools for helping with encrypted systems. If you’re lucky, you’ll have one of the older ransomware applications. There is a 10-20 percent chance that your system can be decrypted without any payment. Prevention is the best strategy.
Keep in mind that some ransomware is so poorly crafted that no decryption is possible; one poor version simply writes over your data with zeros. The criminals will gladly take your money anyway.
Below are some steps that should be useful in reducing the chance that you or your institution will be a victim. Not all steps apply to all situations; an organization’s server would be addressed differently than a remote worker’s laptop. Unfortunately, one of the corporate world’s most effective anti-ransomware strategies -- educating staff by sending out faked social-engineering emails to see who can be fooled into clicking on suspect links -- is not practical for our institutions. Those systems tend to be expensive and assume the ability to hold training sessions for the staff and remediation for staff found to be clicking on malicious links.
The steps below, when combined, are effective and inexpensive. Additionally, they greatly increase an organization’s overall security profile.
- Keep all software updated. Ransomware often takes advantage of known bugs to take over the target’s computer. A large percentage of successful attacks are due to unpatched software.
- Run your personal computer as a “standard user”. Malware runs with the victim’s access rights. If the victim is a standard user on the computer, only their information will be encrypted. If the user is an administrator, everything on the system can be encrypted.
- Use antivirus software. Sophos UTM has a free home version that protects against ransomware. There are other AV vendors that sell effective solutions.
- Use a DNS service that filters out bad websites. Ransomware often resides on known malicious websites – usually in countries where the authorities are slow to take them down. A DNS service that knows the addresses of the bad sites can refuse to resolve them, stopping a malicious link before it connects. Three such services are OpenDNS, CleanBrowsing and Quad9. Cloudflare’s and Google’s DNS services currently do not filter DNS traffic.
- Some firewalls and antivirus applications have a feature called “geo blocking”. This means that traffic coming from parts of the world that are geo-blocked will be stopped. If there is no reason for your staff to visit Russian websites, then blocking traffic with addresses located in Russia will block ransomware hosted there. Geo blocking does not prevent email coming in from any of the blocked locations because your email server’s location is the important factor, not the email sender’s location.
- Having a good backup may be the only form of help for an encrypted drive. First, you may not be able to pay the ransom demand. Second, some ransomware attacks have no decryption keys; the creators may have moved on, or they may have had no intention of ever sending one. The main problem with typical backups is that ransomware will encrypt them, too. Backups need to be done in such a way that an infected computer cannot encrypt them. For example, connect a backup device for the backup and then disconnect it after the backup has completed. A Network Attached Storage (NAS) device used for backups can also be encrypted by ransomware. If the NAS is only being used for backups, consider turning it on for the backup and physically turning it off the rest of the time.
- Data created and stored online typically cannot be encrypted by ransomware. For example, Google Docs cannot be encrypted; likewise, Office 365 documents created online are immune. However, anything created on the user’s local PC can be encrypted, and files synchronized to an online system, such as Google Drive, Dropbox, or OneDrive, would not get flagged as bad: uploading a virus would certainly get flagged, but uploading a document encrypted by ransomware is indistinguishable from uploading a document encrypted by the user. The key element to keep in mind is not where the data eventually lives but where it was created. Local content is accessible to the ransomware application and online content is not.
- A good form of protection for an institution is a transparent proxy. Essentially, a transparent proxy intercepts Internet traffic as it comes and goes and can scan all interactions. Some antivirus systems have this ability, but generally this is a system that an office would have set up for it by an IT expert.
- Every institution should have email anti-spoofing settings in place in their DNS records. Those settings are known as SPF, DKIM and DMARC. These three standards are all based on DNS entries and allow email servers to determine whether a message really comes from the organization it claims to be from. The three standards work together. Given that attackers prefer to send social-engineering emails that appear to come from the target’s own institution, anti-spoofing measures block forged emails bearing your institution’s domain name from coming in. We have a document on how to create these records.
- Attackers, knowing that spoofing email addresses is becoming more difficult, look to see if they can break into a real institutional account and then send a well-crafted email from it. Recipients see a believable message from someone they know. Perhaps it says “I’m a new grandfather! See the photos here” or “We have changed the retirement policy. Please click here to read and then sign the relevant forms”. Even the most knowledgeable people fall victim to such clever messages. When security organizations get compromised, it’s most often a clever spoofed email message that started the process. The best form of protection is two factor authentication on all online accounts. Using two factor authentication essentially eliminates the threat of an account -- email or otherwise -- being taken over.
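The SPF, DKIM and DMARC settings described above are all published as DNS TXT records. Below is a hedged sketch of what they might look like for a hypothetical domain; the domain name, selector name and truncated public key are placeholders for illustration, not values to copy:

```
; SPF: only the listed servers may send mail for this domain
example.org.                      IN TXT "v=spf1 include:_spf.google.com -all"
; DKIM: public key used to verify message signatures (key truncated)
selector1._domainkey.example.org. IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBg..."
; DMARC: quarantine mail that fails the checks and send aggregate reports
_dmarc.example.org.               IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.org"
```

The exact values depend on who sends mail for your domain, which is why our document on creating these records walks through each one.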
Thursday, August 08, 2019
End-of-life is more than just not receiving updates from Microsoft. Vendors stop making versions of their software for your operating system. You might be happy with Word 2003, but your browser may decide not to work any longer. In short, you don't want to be on an unsupported operating system.
Microsoft would like you to upgrade to the newest operating system. But what if you cannot? Old operating systems are generally found on equally old PCs, and old equipment may not be able to support newer operating systems. Even if it can, you may not have the disk space to install the new system or the processing power to make the experience feel quick enough to be usable.
In comes Linux. Linux is not a single operating system; no one runs an operating system called "Linux". Linux is a core set of technologies based on UNIX. There are many versions of Linux, called distros, on the market. Some are commercial, in that you pay for them, but most are free. Fortunately for end-of-life Windows users, a number of Linux distros are designed specifically to ease the transition from Windows to Linux. One of the prime examples is Zorin Linux, but there are perhaps a dozen others. These distros offer interfaces that can be adjusted to look like specific versions of Windows. You want XP, done. You want Windows 7, fine. Windows 10, no problem. Because Linux tends to run well on older hardware, the transition will not be accompanied by reductions in speed. Some distros even come with utilities that will back up all your Windows data and then import that data into your new Linux distro.
If you are not sure which distro you like, or are just nervous about Linux, you can run a distro from a USB device or install it alongside your current Windows installation -- assuming you have the free disk space.
So, before tossing out your end-of-life PC, give Linux a shot. You'll be happy it allows you to keep your old PC and may find that Linux is the operating system you will want on your next new computer.
Tuesday, August 06, 2019
Google just released yet another service with Artificial Intelligence (AI) running the show: Priority folders in Drive. Google has always been about searching for content rather than organizing content via hierarchical folder structures, and it is now adding AI to the process. Google’s back-end AI systems try to determine the content you either would search for or should be aware of. In other words, Drive wants to suggest the content it believes you would most want. Over time, it will learn from your actions and hone the process.
Priority is very close to the Quick View folder you may have seen on the mobile version of Google Drive. The Priority folder is really more of a view: you cannot move or create content in the Priority area. It is purely an area for Google to suggest the content it estimates to be the most likely to be of use to you. Some of the elements it appears to consider when deciding what to present as priorities are…
- Recent work
- Work that is done on a regular basis, such as a quarterly report.
- A document that a collaborator has recently edited.
- A document that has a comment.
- A survey form in which a new response has been recorded.
Search: The New Information Paradigm
The fundamental purpose of any process of structuring content is not to organize it; rather, it is to make the content findable. Many times with technology, the process is confused with the goal. The reason for keeping information is obvious: you cannot find what you don’t have. The reason for folders is to limit the search to the point that the amount of returned material is manageable. This is probably why paper file folders aren’t a foot thick or so narrow as to accommodate only a few pieces of paper. Having a file folder for each piece of paper, or putting every paper into one huge folder, defeats the organizational benefits of having a folder system.
The primary reason we used folders with paper documents is that it was not possible to search the text of the documents. Folders put content into manageable collections that a person could search by scanning the documents with their eyes. The move to digital formats has completely changed the logic of finding information. Computers can search every document just as easily as they can search a folder. We would never consider reading every paper document to find the one we need, but computers can easily do so. Our physical limitations effectively determined the structure of how we organized paper documents: collections could not be too granular (e.g., yesterday’s letter to Joe) nor too broad (e.g., all letters from last year). Paradoxically, the result of too many folders and too few is the same. Computers allow us to redefine the organization of content. It’s important that we use the capabilities of computers to their best effect and not simply transfer our human-based filing-cabinet structures to the digital environment.
Because computers can scan millions of documents just as easily as they can a dozen, the need to organize content into manageable folders is no longer the same. This does not, however, mean that some sort of structure isn’t helpful. Structure, in the digital age, means both classification -- the equivalent of a paper folder system -- and process. How we organize the process by which information flows through your systems is another form of structure.
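As a minimal sketch of this point, a few lines of Python can index every word of every document at once -- something no paper filing system can do. The file names and text below are invented for illustration:

```python
# A tiny inverted index: map each word to the set of documents containing it.
from collections import defaultdict

docs = {
    "letter_to_joe.txt": "thanks joe for the quarterly budget figures",
    "meeting_notes.txt": "budget review moved to friday",
    "recipe.txt": "two cups of flour and one egg",
}

index = defaultdict(set)
for name, text in docs.items():
    for word in text.split():
        index[word].add(name)

def search(word):
    """Return every document containing the word, wherever it is 'filed'."""
    return sorted(index.get(word.lower(), set()))

print(search("budget"))  # finds every file mentioning the budget, no folders needed
```

Real search engines add ranking, stemming and phrase queries on top, but the principle is the same: the computer reads everything so we don't have to.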
AI is extending the search paradigm by learning what users are doing and predicting what they either will search for or would have wanted to search for, if they only knew. In Star Trek, the crew was always asking the computer questions. Siri's great grandchild might do that for us, but I think it's also reasonable to expect computers to bring to our attention those things it "believes" we need.
Tuesday, November 13, 2018
What a Hardware Security Key is
A U2F key generates one-time authentication codes, uniquely encrypts them so that only the legitimate site being logged into can read the code, and of course sends it. The user places the key into a USB port and, when a code is required, lightly presses the gold circle.
Why it is important
Hackers are aware that many high-risk targets now use various forms of two factor authentication to secure their accounts. For targets worth individualized attention, hackers will attempt to defeat two factor authentication by tricking the target into providing them with the code. The U2F security key standard defeats this form of attack by producing a code that only the legitimate site can use. A hacker with a fake site will find the code sent to it useless for logging into the legitimate site, and because the user never sees the code, they cannot read it to a “tech support” person. In other words, the key removes the greatest weakness in any security system -- user error.
U2F Setup
A security key can be the easiest form of two factor authentication because the user may only have to tap the key once. There is no looking for an SMS message, copying it and pasting it in, or opening an authentication application. Even the newer smartphone prompt that asks the user if he or she is logging in, to be answered with “yes” or “no”, requires the user to access the phone and unlock it. While much easier than other methods, the main difficulty with a security key is that one needs to keep it at hand when logging in.
The first decision one has to make is whether to be security-key only or to allow other forms of two factor authentication to be used too. Allowing other forms of two factor to be used in conjunction with security keys has the advantage of making it easier to authenticate on devices that either don’t have USB ports, such as phones, or where the USB ports are inaccessible. If one is often on the road and using multiple devices, having another form of two factor authentication is probably a wise idea. If one is using a few devices at home and at work, then using a security key as the only allowed method is workable.
A security key can be used for as many accounts as desired. And, multiple keys may be associated with the same account.
If one is using a key-only account, having a second key is highly desirable in case the first is lost. These keys are quite rugged; they can survive washing machines, being stepped on and other abuse. The real concern is forgetting to bring the key or not being able to use it on a computer with inaccessible USB ports.
Once the decision of how the key will be used has been made, the setup process is approximately the same as for any other two factor method. Turn on two factor authentication within the application, such as Gmail or Facebook, if it’s not on already; add a device; press the circle on the key when asked; and then press it again at the next login.
Single Factor Protection
Passwords are considered so weak that some vendors are offering passwordless logins for users with U2F keys. Hackers using malware to capture the user’s password, or flooding the system with password guesses, will not be able to break in. As long as the owner keeps the key safe, the account should be safe from breaches of authentication. Windows 10 recently added support for key-only logins, and it is likely that other services will follow suit. However, a good password protects your accounts if your key is ever stolen; two factor is more secure. Whether using a device as the only form of protection is enough is a personal decision. A middle ground is to use a security key with a built-in fingerprint reader. This is not a true two factor solution, but it does mean that another person would not be able to use the key.
Costs
Security keys have to be purchased. Keys with Bluetooth and NFC can range from $40 to $60, but the simplest U2F keys can be found for $9.
Cautions
Losing a key is obviously a risk if the attacker can discover the user’s password. Lost keys can be deactivated.
Most two factor systems allow the user to declare the device being used as a trusted device. Doing so means the user may not have to use a second factor on that device for 30 days. The key element is trust. A device, such as a laptop, should not be trusted given that it could be stolen. One of the advantages of a security key is that it’s so easy to use that a user may not feel a need to trust any device. Pressing the key at every login is quite simple and if there is any chance the computer would be stolen or that someone could access it, then not trusting the computer does not carry a significant burden.
Monday, November 05, 2018
What it is
The appropriate set of “best practices” depends on the type of website being protected; one set of practices does not fit all. A website offering public information clearly requires less attention than a site that contains sensitive user and financial information. Once confidential information has been taken, no amount of effort can bring it back. This document has two sections. The first, Basic Steps, applies to all websites; the second, Advanced Steps, covers some of the considerations relevant to sites whose breaching would result in irreversible damage.
Designing for Security
Website security is rarely a topic of conversation during the web design process, yet it should be. Security is difficult to retrofit after a site has been created. With public websites, hacking attacks can range from merely embarrassing to being a source of malware infections for website visitors. The need for security is far higher when personal and/or financial information is involved. Once a person’s personal information has been taken, it’s never coming back, and in many cases stolen money may never be restored either. Being aware of what a website collects and stores should be a fundamental part of the website design process.
- Keep your site simple. What a website doesn’t have cannot be exploited. Most informational sites can be turned into static pages. “Static” just means the page is always the same. Commercial sites require the services of a CMS (Content Management System) to dynamically build pages. Because the CMS is a primary avenue of attack, not having one represents a significant improvement in security.
- Update software. Most breaches are based on vulnerabilities already patched. Hackers take advantage of systems that have not been updated for months, if not for years.
- Have a backup. The most effective way to repair a hacked site is to restore a recent backup of the site.
- Use strong passwords. A common method for breaking into a website is to login as the administrator. Use a strong password and two factor verification, if available. Some systems will also block login attempts from foreign countries or from IP addresses not placed on the website’s access list.
- Use encryption. Websites with any non-public information should always be protected by an encrypted connection (HTTPS). The trend is for all websites to have encryption and digital certificates, but while it is still optional for informational sites, it should not be considered optional for sites that have any form of private information.
- Use a traffic filtering service. A typical firewall works by blocking access to unauthorized traffic, but a website must be reachable. Website firewall services filter traffic, rather than block it. This service entails the traffic going through a system that looks at all the packets of data. If it sees a pattern inside a packet or a pattern of packets representative of an attack, the service does not pass the traffic to the website. This is a very simplistic description of such services, and not all services are the same, but high value sites should have some sort of traffic filtering.
- Separation of data and website interface. Storing sensitive data on a second and isolated server can help protect the data in the case of the website being compromised. An attacker is forced to defeat two systems, not one.
- Anonymize and separate user data. Collect only the information the site needs. If you don’t need a person’s birthdate or government ID number, don’t ask for it. If you just want to know if the user is a child, youth or adult, ask that or for the year of birth. When a report does not need to use the person’s full information, anonymize and abstract. Keep the most sensitive information in a separate database protected with more stringent access controls.
- Get advice. A site with private information is always more difficult to secure and the ramifications of a breach are far more serious than they are for informational websites. Get expert advice on structuring the website for security. And then ask another expert to test the security of the site.
- Use external credit processors. When it comes to sensitive information processing, such as for credit card purchases, it’s best to use an established provider.
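The anonymize-and-abstract advice above can be sketched in a few lines of Python. The salt value, field names and age bands here are assumptions for illustration, not a prescription:

```python
# Pseudonymize an identifier and abstract a birth year into a coarse band,
# so reports never carry the raw sensitive values.
import hashlib

SALT = "per-deployment-secret"  # assumption: a secret kept outside the main database

def pseudonymize(user_id: str) -> str:
    """Replace a government ID with a salted one-way hash for reporting."""
    return hashlib.sha256((SALT + user_id).encode()).hexdigest()[:16]

def age_band(birth_year: int, current_year: int = 2019) -> str:
    """Store a coarse age band instead of the full birthdate."""
    age = current_year - birth_year
    if age < 13:
        return "child"
    if age < 18:
        return "youth"
    return "adult"

# A report row carries the pseudonym and band, never the originals.
record = {"id": pseudonymize("123-45-6789"), "age": age_band(1985)}
```

The same pseudonym appears for the same person across reports, so counts still work, while the raw ID and birthdate stay in the separate, more tightly controlled database.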
Cautions
It’s impossible to have absolute security. Even if the technology were perfect – and it never is – people are not perfect; they make mistakes and can be tricked into granting access to attackers. Generally, however, an attacker is scanning large numbers of websites for possible targets. If the above steps are followed, it is probable that the attacker will focus on a less well protected site.
Friday, February 24, 2012
Build it and they may come
If people have a choice of whether or not to visit your new web site, they may decide the wait is simply too long. Standing in line at the grocery store we may roll our eyes when the cashier takes too long, but try the same situation on-line and the person "in line" simply hits the back arrow and finds a more responsive site. So, load times are in some ways as important as the quality of the content. The easier it would be for a visitor to find the same content or service elsewhere, the faster your site has to load.
As bad as it may seem that you are losing people to other sites due to slow load times, the problem becomes even worse if those people are finding your site via a Google search. Google measures all their user activity to determine what Google search users like and don't like. Things they like go up and things they don't like go down. When it comes to measuring search results, a click from the search results is seen as a valid indicator of value. But, when a user quickly returns to the search results to try the next link, Google sees that as evidence that the site at the end of the link was not valuable. And, as you might have concluded, the subsequent searches will list your site lower on the search results.
Don't evaluate a site under optimal conditions
One of the most common mistakes I've seen with inexperienced development teams is that the web site owner reviews the site on the developer's computer. It's a natural thing to do -- the site has not been optimized, or the web server may not have been prepared, and those reasons are quite valid -- but if you don't look at the site under some sort of realistic conditions while developing it, you may end up with a slow site that cannot be easily fixed. And before you announce your site publicly, you certainly must test it under a wide range of conditions.
I was involved in a situation where an office I was a consultant to put up its very first web page for an online learning program -- this was a long time ago -- and they did not check their new web site under normal conditions. They loved the site because it looked so much better than all the other web sites; they had wall-to-wall, intricately designed graphics. Today that could be accomplished in a reasonable way, but back in the web 1.0 days, there was no way to get such images into a design without very slow load times. And that's what they had, but they didn't know it. So they went to announce their new online learning program by purchasing a $50k+ full-page advertisement in the New York Times Sunday edition. Saturday night I got a call saying the site was down and asking if I could look at it. It didn't take long to see that the site was running but that the graphics were choking the system. What would have happened if large numbers of Times readers had tried to access it on Sunday is yet another issue. It turned out they thought it was broken because the manager went to show his wife the new site and it would not come up. That was the first time the site had been reviewed on a computer other than the graphic designer's laptop. I was able to optimize the images and get the site down to merely slow, but the lesson I learned that night is one I've seen versions of many times since.
Even if a site loads well under realistic conditions, it's unlikely that you will have hundreds or thousands of simultaneous testers, and a properly designed site can still come to a crawl if the server has limited capacity. Usually, it's a limitation of bandwidth that will slow the site first; any decent server purchased within the last five years is going to have enough capacity to serve even the largest Internet connection. But as people divide their servers into multiple virtual servers, the capacity allocated to the web site might not be sufficient for heavy use. So one way to evaluate the capacity of the server and the Internet connection is to perform a stress test. The cheapest way is to get as many people as possible to use the site at the same time and then see what the levels of used resources are. Divide the total available resources by the amount each person used to estimate how many people could be served. This will give you a basic feel for how many users you might support, but there are so many variables that this number should be considered only an approximation. Even when you do know how many resources a user typically uses, it's not an easy calculation. Users who return to your site multiple times may have your site's graphics in their browser cache, so they take a lot of resources on the first visit but not on subsequent visits because they are not downloading your banners and other graphics again. Users who come from some ISPs, such as AOL, may get images from their ISP's cache. In that case, what matters is not a person's first visit but the first visit from a given ISP; all other users, even on their first visits, will be downloading your content from the ISP's cache and not from your site. Of course, this is good news for your site in terms of reducing strain, but, as you can see, calculating resources is never easy.
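The back-of-the-envelope calculation described above can be sketched in a few lines of Python; the bandwidth figures are invented for illustration, and real caching effects will push the true number around:

```python
# Scale observed per-tester bandwidth up to the link's total capacity
# to get a rough ceiling on simultaneous users.
def estimated_capacity(testers: int, bandwidth_used_mbps: float,
                       bandwidth_available_mbps: float) -> int:
    """Estimate how many simultaneous users the connection could serve."""
    per_user = bandwidth_used_mbps / testers
    return round(bandwidth_available_mbps / per_user)

# Suppose 20 testers together consumed 5 Mbps of a 100 Mbps connection:
print(estimated_capacity(20, 5.0, 100.0))  # -> 400
```

Treat the result as an order-of-magnitude estimate, not a guarantee; as noted above, caches mean real users are cheaper than first-visit testers.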
But, if you are on a fixed set of equipment and Internet connections, the most comprehensive review comes from using an external stress test firm. Such firms have servers located around the world and will attempt to reproduce varying levels of traffic from a number of geographically dispersed locations. They will also use different browsers to see if there are any browser-related bugs in your site's design.
The solution I like most is to host the web site on a hosted platform where more capacity can be added instantly. Some providers allow a web site to burst up to a higher level of capacity than the paid level; if the site consistently goes above the paid level, the hosting provider will expect you to upgrade to the next service level. While this is a great solution, don't let it make you lazy. A slow site may not choke the Internet connection of the hosting firm, but your users will still leave if it is slow. No one is increasing the speed on their side of the connection.
Evaluate and Adjust
Many times the parts of the site that will be of most interest to users are obvious; at other times a site may generate traffic to pages or parts of the site that were not envisioned to be of great interest. Because Google search results flatten a web site's hierarchy of content levels, you may find that a page three levels down in your site is more popular than the pages above it. There can be multiple reasons for this. Perhaps the higher-level pages were not clear about how to find the content users really wanted, and visitors use Google to find content on your site. In other cases, your site may simply be supporting needs you did not realize your audience had -- or you found an audience that you didn't expect to be interested. If the cause is user confusion, fix the text and menu options. Web site statistics can be used to see the "path" users take as they click through the site. Look for results that don't make sense: if a large number of people are going to a page and not continuing on as would be expected, perhaps this indicates a problem. Consider bringing the content up a level or putting a link to it on the home page.
You should also ask yourself if the usage patterns you are seeing indicate a different form of interest than what the site was originally built to address. If so, adjust your site to meet these new needs -- assuming the new audience is of interest to your organization, of course. One of the lessons that's important to learn is that the goal is not just to make a site load faster; rather, it's to get the user to the desired content as quickly as possible.
Monday, December 05, 2011
1. Stay Focusd
Have you ever told yourself you would take a five-minute break to watch a video on YouTube but still find yourself glued to the monitor watching cat videos an hour later? Stay Focusd addresses this issue by providing you with an easy-to-use blacklist for sites that you find too distracting. You can set the maximum time you want to be able to use these distracting sites in a 24-hour period, and after that time expires, the extension will no longer let you browse those sites. It is easy to set up and is seamlessly designed for Google Chrome.
2. Focus Booster
The Pomodoro Technique has long been regarded as an easy way to give yourself small breaks while getting more work done throughout the day. The general idea is to work for 25 minutes and then break for five so that you stay focused on the task at hand. Focus Booster is an Adobe AIR application that runs on your desktop. You can customize the timer to give yourself as much as 15 minutes of break time. Focus Booster also comes in the form of a web app called Focus Booster Live if you don't want to download anything to your desktop.
3. Read It Later
If browsing RSS feeds causes you to lose a lot of time during your day, then Read It Later is the perfect companion to minimizing distractions. If you find a page you want to save until later, you can install one of the many extensions for Firefox, Chrome, or Safari and click a small button to add the page to your Read It Later account. When you have time to read everything you saved throughout the day, simply open the page. Read It Later also offers iOS and Android apps and cleans up unnecessary clutter so you only see the content you want to see.
4. Mr. Number
One of the most annoying things about smartphones is that they can be distracting when you need to get work done. If you want to prevent being distracted by someone constantly calling you or sending you texts, Mr. Number can help. You can block any phone number for any period of time. Mr. Number is available for Android smartphones.
5. Shush! Ringtone Restorer
One of the worst things about silencing your phone to avoid distractions is forgetting to turn the sound back on when it is needed. Shush! solves this problem by allowing you to silence your phone for a specific period of time. Simply use the volume rocker to silence the phone and a window will pop up asking how long you would like to keep the phone silenced. Once that time period has passed, the phone will return to its previous volume level. Shush! is available for Android phones.
Of course, minimizing the distractions you suffer while using your computer depends on your own willpower. Apps like Stay Focusd include methods to keep you from circumventing their settings such as having to type an entire paragraph of text error free, but sticking to your guns and using these tools as a supplement rather than a crutch is always the best way to get things done in a timely manner.
Wednesday, October 26, 2011
“M” stands for “mobile” and “PESA” is the Swahili word for money. M-PESA is literally a “mobile money” service that allows people to make deposits, withdrawals, and transfers without bank accounts. Many businesses have realized its potential and made it possible for customers to pay their bills via M-PESA.
M-PESA is currently available in several other countries, including Tanzania, Afghanistan, South Africa, India, and the UK. It provides a simple way for people abroad in these nations to send money to friends and family members at home.
The service has transformed the way people carry out their personal finances, particularly in developing nations where more people have mobile phones than bank accounts. It's estimated that about 300 million previously “unbanked” people will be using some type of mobile banking by 2012.
Although there are different types of mobile banking services, M-PESA has become more successful because it provided the core of what people really wanted: a way to send and receive money. Other systems offer standard banking services through the mobile phone, but M-PESA doesn't require users to have any bank accounts at all. All they need are their IDs and mobile phones to register for and use M-PESA services.
The initial concept of M-PESA was to provide microfinance institutions a means to collect loan payments easily. However, Vodafone and Safaricom soon realized that was a mistake and reconstructed the program, changing its slogan to “Send money home.” This gave them the opportunity to offer their customers what they truly wanted, which is the major reason for its success.
Before M-PESA, customers had quite a difficult time sending money to their rural homes. They would use such unreliable means as putting money in envelopes and sending it through the postal service, public transport systems, or with friends. Alternatively, they had to make personal trips, which was costly and time-consuming. With a vast network of about 28,000 agents in both urban and rural areas in Kenya, M-PESA has helped people do away with such methods and enabled easy access to their money almost anywhere.
M-PESA’s simple user interface, coupled with free registration, free deposits, and no minimum balance, has made it attractive to customers who previously may have shied away from banking entirely. Many people use the system for personal business transactions without worrying about carrying a lot of cash. With more than 14 million customers in Kenya, its success has prompted the World Bank to pick Michael Joseph, the former Safaricom CEO, to help replicate the model in other developing countries.
Elaine Hirsch is kind of a jack-of-all-interests, from education and history to medicine and videogames. This makes it difficult to choose just one life path, so she is currently working as a writer for various education-related sites and writing about all these things instead.
Wednesday, September 28, 2011
With the rise of text messaging among all cell phone users (not just young people), educators are recognizing the value of this exercise in orthographic knowledge and ability. Some educators regard it as a major distraction, but many want to integrate a popular mode of communication with education and are realizing that text messaging can actually be used to boost learning. Recent studies have shown that texting in the classroom has a positive effect on students' ability to write lengthier and more creative papers in general.
More teachers are harnessing the vast power of texting as an educational tool in a variety of ways. For instance, teachers can have students send in their answers to projected questions for quizzes or discussion as text messages. This simple strategy turns students' phones into a simple and flexible classroom feedback system. Software used to receive text messages from students collects responses and may even give teachers options for managing and displaying collected data.
Student responses are then more easily gathered together and made accessible both to teachers and students themselves. This type of student input can be made anonymous, which can encourage students who might not otherwise participate to do so. By the same token, tracking specific students' responses gives teachers a clear view of who's struggling with comprehension and needs additional help, or who's getting ahead with the material and needs extra challenge.
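The aggregation step such software performs is conceptually simple. Here is a toy sketch -- not any real classroom product; the student IDs and answers are invented -- of tallying texted-in answers, with an option to keep the per-student mapping for the non-anonymous case:

```python
# Toy sketch of aggregating texted-in quiz answers. Assumes each incoming
# message has already been reduced to a (student_id, answer) pair.
from collections import Counter

responses = [("s1", "B"), ("s2", "B"), ("s3", "C"), ("s4", "B")]

def tally(responses, anonymous=True):
    """Count answers; optionally also return who said what."""
    counts = Counter(answer for _, answer in responses)
    if anonymous:
        return counts
    return counts, dict(responses)

print(tally(responses))  # Counter({'B': 3, 'C': 1})
```

The anonymous mode supports the shy-student use case above; the non-anonymous mode supports tracking who needs help.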
This sort of constructive use of mobile technology can and should be used in all subject areas as a dynamic tool for learning. Taking advantage of technology students already have and will be using one way or another really is one of the most valuable means of harnessing that technology for educational ends. Whether teachers like it or not, technology is the future of schooling on all levels.
Indeed, cell phone use in the classroom isn't just limited to secondary schools: elementary teachers and students are also finding similar uses. For example, texting has led to the development of a new literary form known as the "cell phone novel." This type of literature is written entirely using text lingo, such as "Ur" instead of "you're." The novels are written in the form of a text message conversation between any number of interlocutors, include minimal punctuation, and are completely open to the shorthand that characterizes text messages. Young students can use this creative and entertaining form of writing to learn narrative structure and even grammatical mechanics (albeit without the usual focus on spelling and punctuation) much more effectively than they would with the rote drills usually reserved for such topics.
It may become increasingly important to take once-distracting technologies like cell phones and harness their power for the benefit of the classroom. The transition may be fraught with bumps in the road, and the boundaries of technology use must be clearly drawn. Education needs to embrace technology, not simply to keep up with a new generation but to actually make good on the promise technology holds for learning. Because of its prevalence and flexibility, many educators already advocate the use of cell phone technology in the classroom. Students need to benefit from the increased learning that will occur with well-considered use of technology, and feel validated for utilizing skills they are already confident with. Using cell phones in class can help both educators and students streamline the learning process and have a readily available tool for increasing class cooperation and participation.
Natalie Hunter grew up wanting to be a teacher, and is addicted to learning and research. As a result she is grateful for the invention of the internet because it allows her to spend some time outside, rather than just poring through books in a library. She is fascinated by the different methodologies for education at large today, and particularly by the advent of online education. She also loves to travel and learn via interaction with other people and cultures.
Monday, September 12, 2011
Blogger now offers an app for the iPhone that allows blog post creation and editing. The user can upload and edit posts just as they would with a computer. The posts can include photos taken with the phone. It's not clear if the app can also handle videos taken by the camera. Given that Blogger handles video, it seems like the type of feature that, if not there now, can be expected in future releases.
The idea of making a post from a phone is not new. It's been possible to upload posts from smartphones using email for a long time. And, in a pinch one would have to use a smartphone's web browser to edit. Having a specific application makes the entire process easier.
The only issue you need to consider is the process by which posts are published. If you trust your field reporters, you can upload the posts and publish immediately. If there are good reasons for concern, you can have your reporters submit posts as drafts and then have an editor/moderator publish them.
Monday, June 27, 2011
Getting a digital certificate is becoming more important as browsers start expecting everything that is encrypted to be backed by a digital certificate. And more and more traffic is expected to be encrypted -- which is a good thing in terms of security. The digital certificate is what proves to your browser that it has created an encrypted link with the right computer. This is important because encrypted connections often carry sensitive information, and no one wants to provide user names and passwords to a faked site.
The bad news is that setting up a digital certificate is not an easy process. Most people start with "self-signed" digital certificates -- in fact, it's almost impossible to avoid creating one along the way. But browsers don't accept self-signed certificates because there is no external proof that you are who you say you are. That's where a public digital certificate is required.
Below is a diagram of what steps are required to obtain and use a public digital certificate. In upcoming posts, I'll describe my experiences and challenges with getting and using my public certificate.
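Once a certificate is installed, you can check what a browser actually sees. This Python sketch (the helper names and sample host are illustrative assumptions) opens a TLS connection using only the standard library and reports who issued the certificate and when it expires; an untrusted or self-signed certificate will make the connection fail, which is exactly the behavior described above:

```python
# Sketch: connect over TLS and summarize the server's certificate.
import socket
import ssl

def summarize_cert(cert):
    """Flatten the dict returned by ssl's getpeercert() into key fields."""
    return {
        "subject": dict(item[0] for item in cert["subject"]),
        "issuer": dict(item[0] for item in cert["issuer"]),
        "expires": cert["notAfter"],
    }

def fetch_cert(host, port=443):
    """Fetch a host's certificate; raises on untrusted/self-signed certs."""
    ctx = ssl.create_default_context()  # verifies the chain, like a browser
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return summarize_cert(tls.getpeercert())

# Usage (hypothetical): fetch_cert("example.com")
```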
Wednesday, June 22, 2011
One of the problems with antivirus software is that once a bad piece of software gets onto a computer, it can hide from the AV software. This is especially true of rootkits, which dig in under the operating system. The only way to find such deeply buried malware is to scan the computer's hard drive before the system has booted. That means booting from another device. System Sweeper comes from Microsoft as a bootable system. The image file you download is burned to a CD-ROM, which boots into a specialized version of Windows whose sole function is running System Sweeper. So far, most of the people I have given this disk to have come back reporting at least minor problems they previously did not know about.
I suspect the main reason Microsoft does not make this important tool well known is that most users don't understand downloading ISO images or how to boot from the CD-ROM drive. Because problems on your users' home systems will come back to haunt your network, it makes sense to make disks for your users and help them run this utility on their home computers.
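One step worth teaching alongside the download is verifying the ISO before burning it, by comparing the file's checksum with the one the vendor publishes. A minimal sketch, where the file name and published hash are placeholders:

```python
# Sketch: hash a downloaded ISO so it can be checked against the
# vendor-published checksum before burning. Names are placeholders.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Return a file's SHA-256 hex digest, reading in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Usage (hypothetical values):
# assert sha256_of("SystemSweeper.iso") == "<hash from the download page>"
```

A corrupted download that boots erratically is easy to mistake for a malware finding, so this check saves confusion later.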
Monday, August 23, 2010
Tuesday, May 25, 2010
Google has a useful set of tools for blocking cookies -- the type your web browser generates, not the nice ones you eat. First of all, let's set straight what a cookie is. It's a small text file created by your browser at the behest of the site you have visited -- or of a third party somehow related to a site you've visited.
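At the protocol level the mechanics are simple: the server sends a `Set-Cookie` header, and the browser stores the name/value pair and sends it back on later requests. Python's standard library can parse such a header, which makes the structure easy to see (the session value below is invented):

```python
# Parse a (made-up) Set-Cookie header the way a browser would store it.
from http.cookies import SimpleCookie

header = "session=abc123; Path=/; Max-Age=3600"
jar = SimpleCookie()
jar.load(header)

print(jar["session"].value)    # the stored value the site set: abc123
print(jar["session"]["path"])  # attribute controlling where it's sent: /
```

Blocking cookies simply means refusing to store or return these pairs for the sites (or third parties) you specify.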
Tuesday, March 09, 2010
I recently had an overheating problem that caused the computer to crash after a few minutes. The solution from the computer vendor was for me to purchase a $75 power supply. But the power supply was fine; it was the fan that was the problem. I looked online and found a number of places that sold just the fan. I opened the power supply case -- be careful, because the capacitors can still store electricity after the unit has been unplugged. I was happy to find that the fan was plugged in for power rather than hard-wired. I measured the fan's size and counted the number of pins. With that information, I found a replacement fan for $4.00 plus $2.95 shipping. One of the parts vendors I found was directron.com. There are many other sites, but I like their system for quickly finding the right fan.
Tuesday, January 12, 2010
Browsers are serious applications. They don't just show you a picture of some remote web page; they interact with the web site's server in a variety of ways, and they have the power to interact with your computer in ways similar to your operating system. In fact, some computer magazines have called the browser the new operating system. Great power brings great danger. The more a program can do for you, the more it can do against you.
The question of which browser is the most secure is frequently debated. Google Chrome now makes that claim because it has a system for isolating the activity of web pages from the operating system. This isolation, called "sandboxing," is used by a number of applications, such as Java. It helps but is no guarantee of security. IE, frequently described as the least secure, can be made secure if the recommendations of CERT are followed. The problem with these solutions is that the end result is a browser that may not be able to do many of the activities expected of it. The best solution, in my opinion, is to make the default settings in IE strong and add sites you know well to the "trusted" list. These sites can do more because the browser allows them to access features that an untrusted site would not be allowed to use. If you go to a wide variety of new web sites each day, you might want to use Chrome, but if you only go to a handful of the same sites each day, a locked-down version of IE might be the best solution.
Even if you dislike IE, you should still read the instructions for making it more secure. Love or hate IE, the reality is that a number of sites only function with it. There are times you have no option but to use IE.
Of course, the best form of security is common sense. The brain is still the best form of security software.
Friday, January 08, 2010
There are a few things that can be annoying. The main one for me is that it keeps reporting a problem with a fairly unusual application -- in my case, a video camera monitor -- and offers a link to a fix that turns out to be the same version as the one reported as the problem. The video camera people will probably never update their application, and given that their software on my computer does not connect to the Internet, I'm not too worried. The solution I had to use was to "ignore" the application. I would rather not do that, in case there actually is a newer version at some later point.
Yet, given that most vulnerabilities are in applications rather than the OS, this tool is a must-have. While most programs are getting better at auto-updating themselves, there are many applications out there that don't.
In the school environment, this system is also useful for discovering applications you might not have installed. When you see an update pending for an application you didn't install or forgot to remove, you have a chance to take action. I like the fact that this scanner will tell you exactly where the offending application resides. Most updaters don't do that.
I'm going to keep using this product!
Wednesday, November 11, 2009
Secunia is also very useful for finding applications you may not even remember installing. On my system it found an ancient version of WinZip. It clearly was not something I was checking for updates on, yet many attachments come as zip files, and WinZip will automatically start up to uncompress them. If an attachment were dangerous, WinZip could be the compromised entry point. So, a program like Secunia is important for keeping track of all the potential entry points to your Windows computer.
Tuesday, November 03, 2009
SoundBible is a site that contains a large number of free-to-use short audio clips. While many of these sounds can be found elsewhere, this site puts them together in one place, provides good descriptions, and offers them in multiple formats. These sound files are great for student projects, such as YouTube videos or PowerPoint presentations. This site is also a great way to expose your students to working with audio files without having to work with huge files. They have fun, and your lab still has hard drive space at the end of the class.
Friday, October 02, 2009
Friday, August 28, 2009
Snow Leopard, Apple's newest OS, presents an interesting decision for educators. The school year is about to start, and it's always easiest to do major upgrades during vacation periods. Snow Leopard has a number of significant advantages that make it compelling for schools. The advantage I like most is that the new OS is actually smaller than the current one. If your drives are packed to the gills, this move could be quite interesting.
The problem is that not every application your school may be running will operate on Snow Leopard -- or it will at least require a re-install. If you are using older versions of software because you cannot afford the upgrade, you may be locked out permanently. See Computerworld's story on Adobe's statements about their software on Snow Leopard. While Adobe's position is defensible on an economic basis, it's none too comforting to people stuck with old software.
Thursday, August 13, 2009
Thursday, August 06, 2009
Friday, June 26, 2009
Sugar and the applications that come with it have been extensively tested with school kids and are in wide-scale use around the world. If you know you cannot upgrade your old system to Windows Vista or to OS X, you have nothing to lose. There is no installation required: you download Sugar onto a USB device and set the PC's BIOS to boot from that device. Unplug the device and the system boots from whatever is on the hard drive.
Friday, June 19, 2009
The posting covers a number of new techniques that close the door on the most common avenues of attack. Whether we like Windows or not, I think we can all agree that reducing the number of zombie computers slinging spam by the billions is a highly desirable goal.
What does all this mean for you? Basically, the question of whether Windows is secure must now be considered seriously, not just answered with a derisive laugh.
Saturday, June 13, 2009
The most important aspect of video is audio. Unless you are recording mime or interpretive dance, the audio matters. If you are providing instruction, the audio is critical. We'll cover audio at a later date.
More important than the quality of the camera is lighting. While a high-end camera might be able to correct for common mistakes, it's always better to avoid the mistakes in the first place. 3d Rendering has a nice tutorial on how to set up an effective lighting system called "three-point lighting." Basically, the idea is to light the subject from the front with a strong light and use a second light at the subject's side to create some natural shadows. A back light creates a sense of distance from the back wall. While there are three-point lighting systems on the market, you might be able to get away with less if you have a reliable source of light coming in from a window. You may also be able to cut down on lights if you use a reflecting board to bounce light from where a second light would normally be placed.
The most important thing is to try lots of combinations to see what works best for your environment and subjects. For example, how you would light a school play would be very different from a one-on-one interview. The concept of three-point lighting is the same, but the equipment and angles would differ.
Good luck and have fun!
Sunday, May 31, 2009
One of the great difficulties with working on the cheap is that the software your school uses, or would like to use, is beyond your budget. While there are many places to find out which Mac or Linux program can replace a Windows program, there are not many sites that list alternative programs on the same OS. Alternativeto.net does just that.
Alternativeto.net lets you click on a program you use or might want to use to see what other programs do the same thing. The site is still quite basic, but given the lack of alternatives, this site is a lot better than nothing.
If anyone knows of better sites, please put them into the comments area.
Monday, May 04, 2009
One problem that MaximumPC did not cover is how to handle the situation where the blue screen sails past in a fraction of a second as part of an endless loop of failed boots and restarts. My trick in such situations is to use a video camera to record the screen and then replay the recording until I can read the blue screen.
Friday, March 27, 2009
Streaming video is becoming more common all the time. Point-to-point video, such as Skype, is already huge. But what do you do when the subject to be streamed is on a distant stage or in a dark room? Web cams were made for people sitting in front of a computer. To my surprise, modern video cameras do not stream video, nor do many of the "YouTube" video cameras. I did find a solution, but it was not an obvious route: the Sony DCR-HC38.
The newer video cameras can transfer files to PCs and allow for editing of content located on the camera, but very few can stream. Even "YouTube" cameras, such as the Flip, don't stream. While YouTube does not offer live streaming, it does allow live recording. The Flip cannot handle even this mode. Perhaps Cisco will add this capability now that they have purchased Flip.
I looked at top-end web cams and they simply don't have the optical zoom and steady-shot features I wanted. If all you want to do is to sit in front of your PC and Skype someone, a web cam is fine. However, if you want to stream a performance on a stage, a speaker at a distant podium, or a sporting event, webcams don't cut it. Sony has a new "Webbie" camera, but it doesn't have streaming.
The good news is that some older video cameras can handle web streaming -- and those cameras are now inexpensive as vendors discontinue older models. The unit I found is the Sony Handycam DCR-HC38 ($180). You'll need to contact Sony support for new USB drivers not found on the install CDs. The USB works with XP. If you want to run this on a Mac, you'll need to use the i-link (i.e. firewire connection). I have not tried out the firewire connection with my Mac, but I have used the USB on my XP laptop using Skype and one of my streaming services, Mogulus.
Mogulus is a great service for broadcasting public content. The most significant downside is that it has no mechanism for restricting access to a broadcast. Its paid offering does have privacy settings, but it's far too expensive for non-commercial use. Kyte, another streaming service, does offer private broadcasts, but it's limited in functionality compared to Mogulus and has a 60-minute limit. That's a lot in YouTube terms, but not enough for many meetings and events. I'm now exploring Justin.tv. It has no time limits and allows free private broadcasts -- with commercials. I have not used it yet, so it may have downsides I don't know about.

If you think you might want to do video streaming, buy one of these discontinued cameras before new and probably much more expensive cameras hit the market. Yes, the new cameras will have better video quality -- but the extra quality will not translate to the image seen via the Internet.