Search Results

Search found 9715 results on 389 pages for 'bad passwords'.

Page 59/389 | < Previous Page | 55 56 57 58 59 60 61 62 63 64 65 66  | Next Page >

  • How to Recover From a Virus Infection: 3 Things You Need to Do

    - by Chris Hoffman
    If your computer becomes infected with a virus or another piece of malware, removing the malware from your computer is only the first step. There’s more you need to do to ensure you’re secure. Note that not every antivirus alert is an actual infection. If your antivirus program catches a virus before it ever gets a chance to run on your computer, you’re safe. If it catches the malware later, you have a bigger problem. Change Your Passwords You’ve probably used your computer to log into your email, online banking websites, and other important accounts. Assuming you had malware on your computer, the malware could have logged your passwords and uploaded them to a malicious third party. With just your email account, the third party could reset your passwords on other websites and gain access to almost any of your online accounts. To prevent this, you’ll want to change the passwords for your important accounts — email, online banking, and whatever other important accounts you’ve logged into from the infected computer. You should probably use another computer that you know is clean to change the passwords, just to be safe. When changing your passwords, consider using a password manager to keep track of strong, unique passwords and two-factor authentication to prevent people from logging into your important accounts even if they know your password. This will help protect you in the future. Ensure the Malware Is Actually Removed Once malware gets access to your computer and starts running, it has the ability to do many more nasty things to your computer. For example, some malware may install rootkit software and attempt to hide itself from the system. Many types of Trojans also “open the floodgates” after they’re running, downloading many different types of malware from malicious web servers to the local system. In other words, if your computer was infected, you’ll want to take extra precautions. You shouldn’t assume it’s clean just because your antivirus removed what it found. It’s probably a good idea to scan your computer with multiple antivirus products to ensure maximum detection. You may also want to run a bootable antivirus program, which runs outside of Windows. Such bootable antivirus programs will be able to detect rootkits that hide themselves from Windows and even the software running within Windows. avast! offers the ability to quickly create a bootable CD or USB drive for scanning, as do many other antivirus programs. You may also want to reinstall Windows (or use the Refresh feature on Windows 8) to get your computer back to a clean state. This is more time-consuming, especially if you don’t have good backups and can’t get back up and running quickly, but this is the only way you can have 100% confidence that your Windows system isn’t infected. It’s all a matter of how paranoid you want to be. Figure Out How the Malware Arrived If your computer became infected, the malware must have arrived somehow. You’ll want to examine your computer’s security and your habits to prevent more malware from slipping through in the same way. Windows is complex. For example, there are over 50 different types of potentially dangerous file extensions that can contain malware to keep track of. We’ve tried to cover many of the most important security practices you should be following, but here are some of the more important questions to ask: Are you using an antivirus? – If you don’t have an antivirus installed, you should. 
If you have Microsoft Security Essentials (known as Windows Defender on Windows 8), you may want to switch to a different antivirus like the free version of avast!. Microsoft’s antivirus product has been doing very poorly in tests. Do you have Java installed? – Java is a huge source of security problems. The majority of computers on the Internet have an out-of-date, vulnerable version of Java installed, which would allow malicious websites to install malware on your computer. If you have Java installed, uninstall it. If you actually need Java for something (like Minecraft), at least disable the Java browser plugin. If you’re not sure whether you need Java, you probably don’t. Are any browser plugins out-of-date? – Visit Mozilla’s Plugin Check website (yes, it also works in other browsers, not just Firefox) and see if you have any critically vulnerable plugins installed. If you do, ensure you update them — or uninstall them. You probably don’t need older plugins like QuickTime or RealPlayer installed on your computer, although Flash is still widely used. Are your web browser and operating system set to automatically update? – You should be installing updates for Windows via Windows Update when they appear. Modern web browsers are set to automatically update, so they should be fine — unless you went out of your way to disable automatic updates. Using out-of-date web browsers and Windows versions is dangerous. Are you being careful about what you run? – Watch out when downloading software to ensure you don’t accidentally click sketchy advertisements and download harmful software. Avoid pirated software that may be full of malware. Don’t run programs from email attachments. Be careful about what you run and where you get it from in general. If you can’t figure out how the malware arrived because everything looks okay, there’s not much more you can do. Just try to follow proper security practices. You may also want to keep an extra-close eye on your credit card statement for a while if you did any online-shopping recently. As so much malware is now related to organized crime, credit card numbers are a popular target.     

    Read the article

  • Streaming binary data to WCF rest service gives Bad Request (400) when content length is greater than 64k

    - by Mikey Cee
    I have a WCF service that takes a stream: [ServiceContract] public class UploadService : BaseService { [OperationContract] [WebInvoke(BodyStyle=WebMessageBodyStyle.Bare, Method=WebRequestMethods.Http.Post)] public void Upload(Stream data) { // etc. } } This method is to allow my Silverlight application to upload large binary files, the easiest way being to craft the HTTP request by hand from the client. Here is the code in the Silverlight client that does this: const int contentLength = 64 * 1024; // 64 Kb var request = (HttpWebRequest)WebRequest.Create("http://localhost:8732/UploadService/"); request.AllowWriteStreamBuffering = false; request.Method = WebRequestMethods.Http.Post; request.ContentType = "application/octet-stream"; request.ContentLength = contentLength; using (var outputStream = request.GetRequestStream()) { outputStream.Write(new byte[contentLength], 0, contentLength); outputStream.Flush(); using (var response = request.GetResponse()); } Now, in the case above, where I am streaming 64 kB of data (or less), this works OK and if I set a breakpoint in my WCF method, and I can examine the stream and see 64 kB worth of zeros - yay! The problem arises if I send anything more than 64 kB of data, for instance by changing the first line of my client code to the following: const int contentLength = 64 * 1024 + 1; // 64 kB + 1 B This now throws an exception when I call request.GetResponse(): The remote server returned an error: (400) Bad Request. In my WCF configuration I have set maxReceivedMessageSize, maxBufferSize and maxBufferPoolSize to 2147483647, but to no avail. Here are the relevant sections from my service's app.config: <service name="UploadService"> <endpoint address="" binding="webHttpBinding" bindingName="StreamedRequestWebBinding" contract="UploadService" behaviorConfiguration="webBehavior"> <identity> <dns value="localhost" /> </identity> </endpoint> <host> <baseAddresses> <add baseAddress="http://localhost:8732/UploadService/" /> </baseAddresses> </host> </service> <bindings> <webHttpBinding> <binding name="StreamedRequestWebBinding" bypassProxyOnLocal="true" useDefaultWebProxy="false" hostNameComparisonMode="WeakWildcard" sendTimeout="00:05:00" openTimeout="00:05:00" receiveTimeout="00:05:00" maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" maxBufferPoolSize="2147483647" transferMode="StreamedRequest"> <readerQuotas maxArrayLength="2147483647" maxStringContentLength="2147483647" /> </binding> </webHttpBinding> </bindings> <behaviors> <endpointBehaviors> <behavior name="webBehavior"> <webHttp /> </behavior> <endpointBehaviors> </behaviors> How do I make my service accept more than 64 kB of streamed post data?

    Read the article

  • Install 32-bit gstreamer plugins on 64-bit

    - by Rua
    I am trying to install the 32-bit gstreamer plugins on my 64-bit system (Ubuntu 12.10 based). I can install the packages gstreamer0.10-plugins-base:i386 and gstreamer0.10-plugins-good:i386. However, gstreamer0.10-plugins-bad:i386, gstreamer0.10-plugins-bad-multiverse:i386 and gstreamer0.10-plugins-ugly:i386 conflict with 64-bit packages already installed on my system: $ sudo apt-get install gstreamer0.10-plugins-bad:i386 Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: gstreamer0.10-plugins-bad:i386 : Depends: libass4:i386 (>= 0.9.7) but it is not going to be installed Depends: libdca0:i386 but it is not going to be installed Depends: libdvdnav4:i386 (>= 4.2.0+20120524) but it is not going to be installed Depends: libdvdread4:i386 but it is not going to be installed Depends: libslv2-9:i386 (>= 0.6.4-1~) but it is not going to be installed E: Unable to correct problems, you have held broken packages. ... $ sudo apt-get install gstreamer0.10-plugins-bad-multiverse:i386 Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: libavcodec53:i386 libavutil51:i386 libfaac0:i386 libfaad2:i386 libgsm1:i386 libmjpegtools-1.9:i386 libmp3lame0:i386 libquicktime2:i386 libschroedinger-1.0-0:i386 libswscale2:i386 libva1:i386 libvpx1:i386 libx264-123:i386 libxvidcore4:i386 The following packages will be REMOVED: gstreamer0.10-plugins-bad-multiverse libfaac0 libmjpegtools-1.9 mint-meta-codecs The following NEW packages will be installed: gstreamer0.10-plugins-bad-multiverse:i386 libavcodec53:i386 libavutil51:i386 libfaac0:i386 libfaad2:i386 libgsm1:i386 libmjpegtools-1.9:i386 libmp3lame0:i386 libquicktime2:i386 libschroedinger-1.0-0:i386 libswscale2:i386 libva1:i386 libvpx1:i386 libx264-123:i386 libxvidcore4:i386 0 upgraded, 15 newly installed, 4 to remove and 5 not upgraded. Need to get 9,198 kB of archives. After this operation, 23.3 MB of additional disk space will be used. Do you want to continue [Y/n]? ... $ sudo apt-get install gstreamer0.10-plugins-ugly:i386 Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: gstreamer0.10-plugins-ugly:i386 : Depends: libdvdread4:i386 but it is not going to be installed E: Unable to correct problems, you have held broken packages. This means that I can't play mp3s (among other things) in 32-bit applications that use gstreamer. Is there a way around this?

    Read the article

  • Circular database relationships. Good, Bad, Exceptions?

    - by jim
    I have been putting off developing this part of my app for some time, purely because I want to do this in a circular way but get the feeling it's a bad idea from what I remember my lecturers telling me back in school. I have a design for an order system; ignoring everything that doesn't pertain to this example, I'm left with: CreditCard, Customer, Order. I want it so that: Customers can have credit cards (0-n); Customers have orders (1-n); Orders have one customer (1-1); Orders have one credit card (1-1); Credit cards can have one customer (1-1) (unique ids, so we can ignore uniqueness of cc number; husband/wife may share cc instances, etc.). Basically the last part is where the issue shows up: sometimes credit cards are declined and they wish to use a different one. This needs to update which their 'current' card is, but this can only change the current card used for that order, not the other orders the customer may have on disk. Effectively this creates a circular design between the three tables. Possible solutions: Either create the circular design, giving references: cc ref to order, customer ref to cc, customer ref to order; or: customer ref to cc, customer ref to order, and create a new table that references all three table ids with a unique constraint on the order so that only one cc may be current for that order at any time. Essentially both model the same design but translate differently. I am liking the latter option best at this point in time because it seems less circular and more central. (If that even makes sense.) My questions are: What, if any, are the pros and cons of each? What are the pitfalls of circular relationships/dependencies? Is this a valid exception to the rule? Is there any reason I should pick the former over the latter? Thanks, and let me know if there is anything you need clarified/explained. --Update/Edit-- I have noticed an error in the requirements I stated. Basically dropped the ball when trying to simplify things for SO. There is another table there for Payments which adds another layer. The catch: Orders can have multiple payments, with the possibility of using different credit cards (and, if you really want to know, even other forms of payment). Stating this here because I think the underlying issue is still the same and this only really adds another layer of complexity.

    Read the article

  • Is it a good or bad practice to call instance methods from a java constructor?

    - by Steve
    There are several different ways I can initialize complex objects (with injected dependencies and required set-up of injected members), all of which seem reasonable but have various advantages and disadvantages. I'll give a concrete example: final class MyClass { private final Dependency dependency; @Inject public MyClass(Dependency dependency) { this.dependency = dependency; dependency.addHandler(new Handler() { @Override void handle(int foo) { MyClass.this.doSomething(foo); } }); doSomething(0); } private void doSomething(int foo) { dependency.doSomethingElse(foo+1); } } As you can see, the constructor does 3 things, including calling an instance method. I've been told that calling instance methods from a constructor is unsafe because it circumvents the compiler's checks for uninitialized members. I.e. I could have called doSomething(0) before setting this.dependency, which would have compiled but not worked. What is the best way to refactor this? Make doSomething static and pass in the dependency explicitly? In my actual case I have three instance methods and three member fields that all depend on one another, so this seems like a lot of extra boilerplate to make all three of these static. Move the addHandler and doSomething into an @Inject public void init() method. While use with Guice will be transparent, it requires any manual construction to be sure to call init() or else the object won't be fully functional if someone forgets. Also, this exposes more of the API, both of which seem like bad ideas. Wrap a nested class to keep the dependency and make sure it behaves properly without exposing additional API: class DependencyManager { private final Dependency dependency; public DependencyManager(Dependency dependency) { ... } public void doSomething(int foo) { ... } } @Inject public MyClass(Dependency dependency) { DependencyManager manager = new DependencyManager(dependency); manager.doSomething(0); } This pulls instance methods out of all constructors, but generates an extra layer of classes, and when I already had inner and anonymous classes (e.g. that handler) it can become confusing - when I tried this I was told to move the DependencyManager to a separate file, which is also distasteful because it's now multiple files to do a single thing. So what is the preferred way to deal with this sort of situation?
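    One common refactoring for this situation is to keep the constructor limited to field assignment and do the wiring in a static factory method, so nothing can observe the object before every final field is set. The sketch below reuses the names from the question (Dependency, Handler, doSomething) and is only an illustration of the idea, not a drop-in answer:

        final class MyClass {
            private final Dependency dependency;

            // Constructor only assigns fields; no instance method runs here.
            private MyClass(Dependency dependency) {
                this.dependency = dependency;
            }

            // All wiring happens after construction, on a fully built object.
            static MyClass create(Dependency dependency) {
                final MyClass instance = new MyClass(dependency);
                dependency.addHandler(new Handler() {
                    @Override
                    void handle(int foo) {
                        instance.doSomething(foo);
                    }
                });
                instance.doSomething(0);
                return instance;
            }

            private void doSomething(int foo) {
                dependency.doSomethingElse(foo + 1);
            }
        }

    If Guice still needs to build the object itself, the factory could be called from a @Provides method in a module; that detail is an assumption about the setup rather than something stated in the question.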

    Read the article

  • The remote server returned an error: (400) Bad Request - uploading less 2MB file size?

    - by fiberOptics
    The file succeed to upload when it is 2KB or lower in size. The main reason why I use streaming is to be able to upload file up to at least 1 GB. But when I try to upload file with less 1MB size, I get bad request. It is my first time to deal with downloading and uploading process, so I can't easily find the cause of error. Testing part: private void button24_Click(object sender, EventArgs e) { try { OpenFileDialog openfile = new OpenFileDialog(); if (openfile.ShowDialog() == System.Windows.Forms.DialogResult.OK) { string port = "3445"; byte[] fileStream; using (FileStream fs = new FileStream(openfile.FileName, FileMode.Open, FileAccess.Read, FileShare.Read)) { fileStream = new byte[fs.Length]; fs.Read(fileStream, 0, (int)fs.Length); fs.Close(); fs.Dispose(); } string baseAddress = "http://localhost:" + port + "/File/AddStream?fileID=9"; HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(baseAddress); request.Method = "POST"; request.ContentType = "text/plain"; //request.ContentType = "application/octet-stream"; Stream serverStream = request.GetRequestStream(); serverStream.Write(fileStream, 0, fileStream.Length); serverStream.Close(); using (HttpWebResponse response = request.GetResponse() as HttpWebResponse) { int statusCode = (int)response.StatusCode; StreamReader reader = new StreamReader(response.GetResponseStream()); } } } catch (Exception ex) { MessageBox.Show(ex.Message); } } Service: [WebInvoke(UriTemplate = "AddStream?fileID={fileID}", Method = "POST", BodyStyle = WebMessageBodyStyle.Bare)] public bool AddStream(long fileID, System.IO.Stream fileStream) { ClasslLogic.FileComponent svc = new ClasslLogic.FileComponent(); return svc.AddStream(fileID, fileStream); } Server code for streaming: namespace ClasslLogic { public class StreamObject : IStreamObject { public bool UploadFile(string filename, Stream fileStream) { try { FileStream fileToupload = new FileStream(filename, FileMode.Create); byte[] bytearray = new byte[10000]; int bytesRead, totalBytesRead = 0; do { bytesRead = fileStream.Read(bytearray, 0, bytearray.Length); totalBytesRead += bytesRead; } while (bytesRead > 0); fileToupload.Write(bytearray, 0, bytearray.Length); fileToupload.Close(); fileToupload.Dispose(); } catch (Exception ex) { throw new Exception(ex.Message); } return true; } } } Web config: <system.serviceModel> <bindings> <basicHttpBinding> <binding> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="2097152" maxBytesPerRead="4096" maxNameTableCharCount="2097152" /> <security mode="None" /> </binding> <binding name="ClassLogicBasicTransfer" closeTimeout="00:05:00" openTimeout="00:05:00" receiveTimeout="00:15:00" sendTimeout="00:01:00" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="67108864" maxReceivedMessageSize="67108864" messageEncoding="Mtom" textEncoding="utf-8" useDefaultWebProxy="true"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="67108864" maxBytesPerRead="4096" maxNameTableCharCount="67108864" /> <security mode="None"> <transport clientCredentialType="None" proxyCredentialType="None" realm="" /> <message clientCredentialType="UserName" algorithmSuite="Default" /> </security> </binding> <binding name="BaseLogicWSHTTP"> <security mode="None" /> </binding> <binding name="BaseLogicWSHTTPSec" /> </basicHttpBinding> </bindings> <behaviors> <serviceBehaviors> <behavior> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint 
above before deployment --> <serviceMetadata httpGetEnabled="true" /> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="true" /> </behavior> </serviceBehaviors> </behaviors> <serviceHostingEnvironment multipleSiteBindingsEnabled="true" aspNetCompatibilityEnabled="true" /> </system.serviceModel> I'm not sure if this affects the streaming function, because I'm using WCF4.0 rest template which config is dependent in Global.asax. One more thing is this, whether I run the service and passing a stream or not, the created file always contain this thing. How could I remove the "NUL" data? Thanks in advance. Edit public bool UploadFile(string filename, Stream fileStream) { try { FileStream fileToupload = new FileStream(filename, FileMode.Create); byte[] bytearray = new byte[10000]; int bytesRead, totalBytesRead = 0; do { bytesRead = fileStream.Read(bytearray, totalBytesRead, bytearray.Length - totalBytesRead); totalBytesRead += bytesRead; } while (bytesRead > 0); fileToupload.Write(bytearray, 0, totalBytesRead); fileToupload.Close(); fileToupload.Dispose(); } catch (Exception ex) { throw new Exception(ex.Message); } return true; }

    Read the article

  • Is Visual Source Safe (The latest Version) really that bad? Why? What's the Best Alternative? Why? [closed]

    - by hanzolo
    Over the years I've constantly heard horror stories, had people say "Real Programmers Dont Use VSS", and so on. BUT, then in the workplace I've worked at two companies, one, a very well known public facing high traffic website, and another high end Financial Services "Web-Based" hosted solution catering to some very large, very well known companies, which is where I currently Reside and everything's working just fine (KNOCK KNOCK!!). I'm constantly interfacing with EXTREMELY Old technology with some of these financial institutions.. OLD LIKE YOU WOULDN'T BELIEVE.. which leads me to the conclusion that if it works "LEAVE IT", and that maybe there's some value in old technology? at least enough value to overrule a rewrite!? right?? Is there something fundamentally flawed with the underlying technology that VSS uses? I have a feeling that if i said "someone said VSS Sucks" they would beg to differ, most likely give me this look like i dont know -ish, and I'd never gain back their respect and my credibility (well, that'll be hard to blow.. lol), BUT, give me an argument that I can take to someone whose been coding for 30 years, that builds Platforms that leverage current technology (.NET 3.5 / SQL 2008 R2 ), write's their own ORM with scaffolding and is able to provide a quality platform that supports thousands of concurrent users on a multi-tenant hosted solution, and does not agree with any benefits from having Source Control Integrated, and yet uses the Infamous Visual Source Safe. I have extensive experience with TFS up to 2010, and honestly I think it's great when a team (beyond developers) can embrace it. I've worked side by side with someone whose a die hard SVN'r and from a purist standpoint, I see the beauty in it (I need a bit more, out of my SS, but it surely suffices). So, why are such smarties not running away from Visual Source Safe? surely if it was so bad, it would've have been realized by now, and I would not be sitting here with this simple old, Check In, Check Out, Version Resistant, Label Intensive system. But here I am... I would love to drop an argument that would be the end all argument, but if it's a matter of opinion and personal experience, there seems to be too much leeway for keeping VSS. UPDATE: I guess the best case is to have the VSS supporters check other people's experiences and draw from that until we (please no) experience the breaking factor ourselves. Until then, i wont be engaging in a discussion to migrate off of VSS.. UPDATE 11-2012: So i was able to convince everyone at my work place that since MS is sun downing Visual Source Safe it might be time to migrate over to TFS. I was able to convince them and have recently upgraded our team to Visual Studio 2012 and TFS 2012. The migration was fairly painless, had to run analyze.exe which found a bunch of errors (not sure they'll ever affect the project) and then manually run the VSSConverter.exe. Again, painless, except it took 16 hours to migrate 5 years worth of everything.. and now we're on TFS.. much more integrated.. much more cooler.. so all in all, VSS served it's purpose for years without hick-up. There were no horror stories and Visual Source Save as source control worked just fine. so to all the nay sayers (me included). there's nothing wrong with using VSS. i wouldnt start a new project with it, and i would definitely consider migrating to TFS. (it's really not super difficult and a new "wizard" type converter is due out any day now so migrating should be painless). 
But from my experience, it worked just fine and got the job done.

    Read the article

  • Managed Service Accounts (MSA) and Virtual Accounts

    Windows Server 2008 R2 and Windows 7 have two new types of service accounts called Managed Service Accounts (MSA) and Virtual Accounts. These make long-term management of service account users, passwords and SPNs much easier. Consider the environment at OrcsWeb. As a PCI-compliant hosting company, we need to change all security-related passwords every 3 months. This is a substantial undertaking each time because of hundreds of passwords spread throughout our enterprise. We...

    Read the article

  • Copying logins to another server

    - by DavidWimbush
    I'm busy setting up a new server to replace our main live server and part of that is to get the logins copied over. The database users will come over when I restore the databases, but I wanted to get the logins they relate to, with the same SIDs, passwords and other properties as they have on the current server. In fact I don't even know the passwords for the logins created by our Sage accounting package - apparently they are generated by the setup using a number of ingredients unique to each installation. I did some Googling and found this KB article: http://support.microsoft.com/kb/918992/, which more or less did the trick. It produces a set of CREATE LOGIN statements with the SIDs and hashed passwords. But it didn't include the default language, which can subtly or dramatically alter the behaviour of date-related SQL. So I added that bit and you can help yourself here.

    Read the article

  • No more: "What was my password again? Was it 12345 or 123456?"

    - by hinkmond
    Keep track of all your passwords with this Java ME password tracker on your Java feature phone. See: Java ME KeePassMobile Here's a quote: You can put all your passwords in one database, which is locked with one master key and/or a key file. ... KeePassMobile is a password manager software for mobile phones (J2ME platform) that is compatible to KeePass. With KeePassMobile you are able to store all your passwords in a highly-encrypted KeePass (1.x*) database on your mobile phone and view them on the go! Don't leave home without it! And, don't forget your master password either, because if you do... you're pretty much fried with Y-rays. Hinkmond

    Read the article

  • The Unintended Consequences of Sound Security Policy

    - by Tanu Sood
    Author: Kevin Moulton, CISSP, CISM Meet the Author: Kevin Moulton, Senior Sales Consulting Manager, Oracle Kevin Moulton, CISSP, CISM, has been in the security space for more than 25 years, and with Oracle for 7 years. He manages the East Enterprise Security Sales Consulting Team. He is also a Distinguished Toastmaster. Follow Kevin on Twitter at twitter.com/kevin_moulton, where he sometimes tweets about security, but might also tweet about running, beer, food, baseball, football, good books, or whatever else grabs his attention. Kevin will be a regular contributor to this blog so stay tuned for more posts from him. When I speak to a room of IT administrators, I like to begin by asking them if they have implemented a complex password policy. Generally, they all nod their heads enthusiastically. I ask them if that password policy requires long passwords. More nodding. I ask if that policy requires upper and lower case letters – faster nodding – numbers – even faster – special characters – enthusiastic nodding all around! I then ask them if their policy also includes a requirement for users to regularly change their passwords. Now we have smiles with the nodding! I ask them if the users have different IDs and passwords on the many systems that they have access to. Of course! I then ask them if, when they walk around the building, they see something like this. (Thanks to Jake Ludington for the nice example.) Can these administrators be faulted for their policies? Probably not, but in the end end-users will find a way to get their job done efficiently. Post-It Notes to the rescue! I was visiting a business in New York City one day which was a perfect example of this problem. First I walked up to the security desk and told them where I was headed. They asked me if they should call upstairs to have someone escort me. Is that my call? Is that policy? I said that I knew where I was going, so they let me go. Having the conference room number handy, I wandered around the place in search of my destination. As I walked around, unescorted, I noticed the post-it note problem in abundance. Had I been so inclined, I could have logged in on almost any machine and into any number of systems. When I reached my intended conference room, I mentioned my post-it note observation to the two gentlemen with whom I was meeting. One of them said, “You mean like this,” and he produced a post-it note full of login IDs and passwords from his breast pocket! I gave him kudos for not hanging the list on his monitor. We then talked for the rest of the meeting about the difficulties faced by the employees due to the security policies. These policies, although well-intended, made life very difficult for the end-users. Most users had access to 8 to 12 systems, and the passwords for each expired at different times. The post-it note solution was understandable.
Who could remember even half of them? What could this customer have done differently? I am a fan of using a provisioning system, such as Oracle Identity Manager, to manage all of the target systems. With OIM, an email could be automatically sent to all users when it was time to change their password. The end-users would follow a link to change their password on a web page, and then OIM would propagate that password out to all of the systems that the user had access to, even if the login IDs were different. Another option would be an Enterprise Single Sign-On solution. With Oracle eSSO, all of a user’s credentials would be stored in a central, encrypted credential store. The end-user would only have to log in to their machine each morning and then, as they moved to each new system, Oracle eSSO would supply the credentials. Good-bye post-it notes! 3M may be disappointed, but your end users will thank you. I hear people say that this post-it note problem is not a big deal, because the only people who would see the passwords are fellow employees. Do you really know who is walking around your building? What are the password policies in your business? How do the end-users respond?

    Read the article

  • How can I disable DNSSEC for Google Apps (GMail) MX records on my authoritative domains?

    - by meinemitternacht
    I'm running a BIND Master / Slave setup with DNSSEC, but some of my domains use Google Apps for e-mail services. Google doesn't support DNSSEC and BIND doesn't like it at all. Log output: Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ALT2.ASPMX.L.GOOGLE.COM.dlv.isc.org/DLV/IN': 70.32.45.42#53 Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ALT2.ASPMX.L.GOOGLE.COM/A/IN': 70.32.45.42#53 Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ALT2.ASPMX.L.GOOGLE.COM/AAAA/IN': 70.32.45.42#53 Sep 6 17:12:51 srv549 named[5376]: validating @0x7f755cb83950: ALT2.ASPMX.L.GOOGLE.COM AAAA: bad cache hit (ALT2.ASPMX.L.GOOGLE.COM.dlv.isc.org/DLV) Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ALT2.ASPMX.L.GOOGLE.COM/AAAA/IN': 69.147.224.178#53 Sep 6 17:12:51 srv549 named[5376]: validating @0x7f755ca52c30: ALT2.ASPMX.L.GOOGLE.COM A: bad cache hit (ALT2.ASPMX.L.GOOGLE.COM.dlv.isc.org/DLV) Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ALT2.ASPMX.L.GOOGLE.COM/A/IN': 69.147.224.178#53 Sep 6 17:12:51 srv549 named[5376]: validating @0x7f755ca52c30: ASPMX2.GOOGLEMAIL.COM AAAA: bad cache hit (ASPMX2.GOOGLEMAIL.COM.dlv.isc.org/DLV) Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ASPMX2.GOOGLEMAIL.COM/AAAA/IN': 70.32.45.42#53 Sep 6 17:12:51 srv549 named[5376]: validating @0x7f755cb83950: ASPMX2.GOOGLEMAIL.COM A: bad cache hit (ASPMX2.GOOGLEMAIL.COM.dlv.isc.org/DLV) Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ASPMX2.GOOGLEMAIL.COM/A/IN': 70.32.45.42#53 Sep 6 17:12:51 srv549 named[5376]: validating @0x7f754c1b0bd0: ASPMX2.GOOGLEMAIL.COM A: bad cache hit (ASPMX2.GOOGLEMAIL.COM.dlv.isc.org/DLV) Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ASPMX2.GOOGLEMAIL.COM/A/IN': 70.32.45.42#53 Sep 6 17:12:51 srv549 named[5376]: validating @0x7f754c1a6a30: ASPMX2.GOOGLEMAIL.COM AAAA: bad cache hit (ASPMX2.GOOGLEMAIL.COM.dlv.isc.org/DLV) Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ASPMX2.GOOGLEMAIL.COM/AAAA/IN': 70.32.45.42#53 Sep 6 17:12:51 srv549 named[5376]: validating @0x7f755cb83950: ASPMX3.GOOGLEMAIL.COM AAAA: bad cache hit (ASPMX3.GOOGLEMAIL.COM.dlv.isc.org/DLV) I'm not absolutely sure this is stopping Google Apps from working, because I just enabled all of the DNSSEC features. Does anyone here have experience with this?

    Read the article

  • java.lang.UnsupportedClassVersionError: Bad version number in .class file?

    - by grmn.bob
    I am getting this error when I include an opensource library that I had to compile from source. Now, all the suggestions on the web indicate that the code was compiled in one version and executed in another version (new on old). However, I only have one version of JRE on my system. If I run the commands: $ javac -version javac 1.5.0_18 $ java -version java version "1.5.0_18" Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_18-b02) Java HotSpot(TM) Server VM (build 1.5.0_18-b02, mixed mode) and check in Eclipse for the properties of the java library, I get 1.5.0_18 Therefore, I have to conclude something else, internal to a class itself, is throwing the exception?? Is that even possible?
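    One quick way to confirm which release a suspect .class file inside the library was actually compiled for is to read its header bytes, since the major version stored there is exactly what the JVM compares against. A minimal sketch (the path argument is whatever class the stack trace complains about; 49 corresponds to Java 5, 50 to Java 6, 51 to Java 7):

        import java.io.DataInputStream;
        import java.io.FileInputStream;
        import java.io.IOException;

        public class ClassVersionCheck {
            public static void main(String[] args) throws IOException {
                DataInputStream in = new DataInputStream(new FileInputStream(args[0]));
                try {
                    int magic = in.readInt();            // 0xCAFEBABE for a valid class file
                    int minor = in.readUnsignedShort();
                    int major = in.readUnsignedShort();  // 49 = Java 5, 50 = Java 6, 51 = Java 7
                    System.out.println("magic=" + Integer.toHexString(magic)
                            + " major=" + major + " minor=" + minor);
                } finally {
                    in.close();
                }
            }
        }

    If the reported major version is greater than 49, that class (or one of the library's bundled dependencies) was built with a newer compiler than the 1.5.0_18 runtime, even though the local javac is 1.5.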

    Read the article

  • Is this so bad when using MySQL queries in PHP?

    - by alex
    I need to update a lot of rows, per a user request. It is a site with products. I could... Delete all old rows for that product, then loop through string building a new INSERT query. This however will lose all data if the INSERT fails. Perform an UPDATE through each loop. This loop currently iterates over 8 items, but in the future it may get up to 15. This many UPDATEs doesn't sound like too good an idea. Change DB Schema, and add an auto_increment Id to the rows. Then first do a SELECT, get all old rows ids in a variable, perform one INSERT, and then a DELETE WHERE IN SET. What is the usual practice here? Thanks
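    Whatever the language, the usual way to avoid the "lose everything if the INSERT fails" problem is to wrap the delete and the inserts in a single transaction, so the old rows only disappear when the new ones commit. A hedged sketch of the shape of that, written here in JDBC terms purely as an illustration; the connection details, table, and column names are all made up, and it assumes an InnoDB (transaction-capable) table:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        public class ReplaceProductRows {
            public static void main(String[] args) throws Exception {
                // Placeholder connection details; requires the MySQL Connector/J driver.
                Connection con = DriverManager.getConnection(
                        "jdbc:mysql://localhost/shop", "user", "password");
                con.setAutoCommit(false); // start a transaction
                try {
                    PreparedStatement del = con.prepareStatement(
                            "DELETE FROM product_attributes WHERE product_id = ?");
                    del.setLong(1, 42L);
                    del.executeUpdate();

                    PreparedStatement ins = con.prepareStatement(
                            "INSERT INTO product_attributes (product_id, name, value) VALUES (?, ?, ?)");
                    for (String[] row : new String[][] { { "colour", "red" }, { "size", "M" } }) {
                        ins.setLong(1, 42L);
                        ins.setString(2, row[0]);
                        ins.setString(3, row[1]);
                        ins.addBatch();
                    }
                    ins.executeBatch();
                    con.commit();   // old rows only vanish if every insert succeeded
                } catch (Exception e) {
                    con.rollback(); // failure leaves the original rows untouched
                    throw e;
                } finally {
                    con.close();
                }
            }
        }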

    Read the article

  • Which functions in the C standard library commonly encourage bad practice?

    - by Ninefingers
    Hello all, This is inspired by this question and the comments on one particular answer in that I learnt that strncpy is not a very safe string handling function in C and that it pads zeros, until it reaches n, something I was unaware of. Specifically, to quote R.. strncpy does not null-terminate, and does null-pad the whole remainder of the destination buffer, which is a huge waste of time. You can work around the former by adding your own null padding, but not the latter. It was never intended for use as a "safe string handling" function, but for working with fixed-size fields in Unix directory tables and database files. snprintf(dest, n, "%s", src) is the only correct "safe strcpy" in standard C, but it's likely to be a lot slower. By the way, truncation in itself can be a major bug and in some cases might lead to privilege elevation or DoS, so throwing "safe" string functions that truncate their output at a problem is not a way to make it "safe" or "secure". Instead, you should ensure that the destination buffer is the right size and simply use strcpy (or better yet, memcpy if you already know the source string length). And from Jonathan Leffler Note that strncat() is even more confusing in its interface than strncpy() - what exactly is that length argument, again? It isn't what you'd expect based on what you supply strncpy() etc - so it is more error prone even than strncpy(). For copying strings around, I'm increasingly of the opinion that there is a strong argument that you only need memmove() because you always know all the sizes ahead of time and make sure there's enough space ahead of time. Use memmove() in preference to any of strcpy(), strcat(), strncpy(), strncat(), memcpy(). So, I'm clearly a little rusty on the C standard library. Therefore, I'd like to pose the question: What C standard library functions are used inappropriately/in ways that may cause/lead to security problems/code defects/inefficiencies? In the interests of objectivity, I have a number of criteria for an answer: Please, if you can, cite design reasons behind the function in question i.e. its intended purpose. Please highlight the misuse to which the code is currently put. Please state why that misuse may lead towards a problem. I know that should be obvious but it prevents soft answers. Please avoid: Debates over naming conventions of functions (except where this unequivocably causes confusion). "I prefer x over y" - preference is ok, we all have them but I'm interested in actual unexpected side effects and how to guard against them. As this is likely to be considered subjective and has no definite answer I'm flagging for community wiki straight away. I am also working as per C99.

    Read the article

  • Does constantly checking the documentation make you a bad coder?

    - by cdburgess
    When writing PHP code for any given project, do you find you can write code off the top of your head? Or do you make multiple round trips to php.net? If it is the latter, can you still be considered a good coder? This is a legitimate question, as I find I have difficulty always remembering all of the functions that are available to me, so I use php.net as a crutch. Is there any way to improve this?

    Read the article

  • Pseudorandom crashes in Flash Debugger - My bad, or Adobe's?

    - by rinogo
    I'm working on a large-size dual AS3/Flex project (some parts are pure AS3, other parts are Flex), and I'm experiencing a lot of Flash Debugger crashes. These crashes aren't completely random - it seems like I can get them to occur with greater consistency when I perform certain actions in my app. However, at the same time, they aren't consistently repeatable - sometimes a set of actions causes my app to crash, and other times, the same steps execute fine without a crash. I have two questions (carefully worded to remove my personal bias :) ) Are these crashes due to my coding practices, or Adobe's Flash Debugger? When I deploy my app on a web site and access it via Flash Player, should I expect the same crashes to occur, or is Flash Player considerably more resilient than Flash Debugger? Thanks so much, all! :) -Rich

    Read the article

  • Using PostRequestHandlerExecute, Flush and Close to clean up after a request - why is this bad?

    - by Erwin
    After some requests I need to clean up after the user - by calling a remote web service to release some resources if I guess the user doesn't need them anymore. It is OK to leave them hanging and let them time out on the other server, but the polite thing to do is to inform it that I do not need them anymore. I do not want to waste the user's time waiting for the clean-up - so I tried to find a place to put it. First I tried Application_EndRequest, but I needed something later. Then I found PostRequestHandlerExecute which seemed like a nice place, but I still need a Flush and Close to release the connection to the user. Protected Sub Application_PostRequestHandlerExecute(ByVal sender As Object, ByVal e As System.EventArgs) Response.Flush() Response.Close() ' Simulation of clean up activity: System.Threading.Thread.Sleep(4000) ' Really a couple of web service calls End Sub Is there some other place I could put these lengthy clean-up routines?

    Read the article

  • How bad is code using std::basic_string<t> as a contiguous buffer?

    - by BillyONeal
    I know technically the std::basic_string template is not required to have contiguous memory. However, I'm curious how many implementations exist for modern compilers that actually take advantage of this freedom. For example, if one wants code like the following it seems silly to allocate a vector just to turn around instantly and return it as a string: DWORD valueLength = 0; DWORD type; LONG errorCheck = RegQueryValueExW( hWin32, value.c_str(), NULL, &type, NULL, &valueLength); if (errorCheck != ERROR_SUCCESS) WindowsApiException::Throw(errorCheck); else if (valueLength == 0) return std::wstring(); std::wstring buffer; do { buffer.resize(valueLength/sizeof(wchar_t)); errorCheck = RegQueryValueExW( hWin32, value.c_str(), NULL, &type, &buffer[0], &valueLength); } while (errorCheck == ERROR_MORE_DATA); if (errorCheck != ERROR_SUCCESS) WindowsApiException::Throw(errorCheck); return buffer; I know code like this might slightly reduce portability because it implies that std::wstring is contiguous -- but I'm wondering just how unportable that makes this code. Put another way, how may compilers actually take advantage of the freedom having noncontiguous memory allows? Oh: And of course given what the code's doing this only matters for Windows compilers.

    Read the article

  • Globals are bad! But should I use them in this context?

    - by Matt
    Would setting the $link to my database be one thing that I should use the global scope for? In my situation (lots of functions), it seems as though having only one variable in the global scope would be wise. I am currently passing it back and forth between functions so that it is not in the global scope, but that is a bit of a hindrance to my script. Please advise.

    Read the article

  • The same class file in multiple .jar files. How bad is this?

    - by Kannan Goundan
    I have a library that writes data in either a text or binary format. It has the following three components: common data structures text writer (depends on 1) binary writer (depends on 1) The obvious way to distribute this is as 3 .jar files, so that users can include only what they need. However, the "common data structures" component is really just two small classes so I'm considering creating only two .jar files and including the common .class files in both. My question: What are the potential problems with doing this?
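    If the two small classes do end up packaged in more than one .jar, a quick way to see whether (and where) duplicates are being picked up at runtime is to ask the class loader for every copy of the resource. A minimal sketch; the resource path is a hypothetical placeholder for one of the shared classes:

        import java.net.URL;
        import java.util.Enumeration;

        public class DuplicateClassCheck {
            public static void main(String[] args) throws Exception {
                // Placeholder path: substitute one of the common data structure classes.
                String resource = "com/example/common/Record.class";
                Enumeration<URL> copies =
                        DuplicateClassCheck.class.getClassLoader().getResources(resource);
                while (copies.hasMoreElements()) {
                    System.out.println("found in: " + copies.nextElement());
                }
                // More than one line of output means the class is on the classpath twice;
                // the first jar listed wins, so the copies must stay identical.
            }
        }

    The main risk this exposes is version skew: if the text-writer jar and the binary-writer jar ever ship different builds of those two classes, which copy actually gets loaded depends on classpath order.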

    Read the article

  • Java Interface Usage Guidelines -- Are getters and setters in an interface bad?

    - by user68759
    What do people think are the best guidelines for what belongs in an interface? What should and shouldn't go into an interface? I've heard people say that, as a general rule, an interface must only define behavior and not state. Does this mean that an interface shouldn't contain getters and setters? My opinion: perhaps not for setters, but sometimes I think that getters are valid to place in an interface. This is merely to force the implementing classes to provide those getters, and so to indicate that clients are able to call those getters to check on something, for example.
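    As a concrete illustration of that distinction, here is a small sketch (the Account name and its methods are made up for the example): the interface promises read access to things clients legitimately need, while mutation stays behind a behavioural method rather than a raw setter:

        public interface Account {
            // Getters in the contract: clients may rely on being able to read these.
            String getOwner();
            long getBalanceCents();

            // Behaviour rather than a setBalanceCents(): the interface says nothing
            // about how the balance is stored or changed.
            void credit(long amountCents);
        }

        final class SimpleAccount implements Account {
            private final String owner;
            private long balanceCents;

            SimpleAccount(String owner) {
                this.owner = owner;
            }

            @Override
            public String getOwner() {
                return owner;
            }

            @Override
            public long getBalanceCents() {
                return balanceCents;
            }

            @Override
            public void credit(long amountCents) {
                balanceCents += amountCents;
            }
        }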

    Read the article

  • Bad linking in Qt unit test -- missing the link to the moc file?

    - by dwj
    I'm trying to unit test a class that inherits QObject; the class itself is located up one level in my directory structure. When I build the unit test I get the standard unresolved errors if a class' MOC file cannot be found: test.obj : error LNK2001: unresolved external symbol "public: virtual void * __thiscall UnitToTest::qt_metacast(char const *)" (?qt_metacast@UnitToTest@@UAEPAXPBD@Z) + 2 missing functions The MOC file is created but appears to not be linking. I've been poking around SO, the web, and Qt's docs for quite a while and have hit a wall. How do I get the unit test to include the MOC file in the link? ==== My project file is dead simple: TEMPLATE = app TARGET = test DESTDIR = . CONFIG += qtestlib INCLUDEPATH += . .. DEPENDPATH += . HEADERS += test.h SOURCES += test.cpp ../UnitToTest.cpp stubs.cpp DEFINES += UNIT_TEST My directory structure and files: C:. | UnitToTest.cpp | UnitToTest.h | \---test | test.cpp (Makefiles removed for clarity) | test.h | test.pro | stubs.cpp | +---debug | UnitToTest.obj | test.obj | test.pdb | moc_test.cpp | moc_test.obj | stubs.obj Edit: Additional information The generated Makefile.Debug shows the moc file missing: SOURCES = test.cpp \ ..\test.cpp \ stubs.cpp debug\moc_test.cpp OBJECTS = debug\test.obj \ debug\UnitToTest.obj \ debug\stubs.obj \ debug\moc_test.obj

    Read the article

  • Why is calling Process.killProcess(Process.myPid()) a bad idea?

    - by Tal Kanel
    I've read some posts saying that using this method is "not good", that it shouldn't be used, that it's not the right way to "close" the application and not how Android works... I understand and accept that the Android OS knows better than me when it's the right time to terminate the process, but I haven't yet heard a good explanation of why using the killProcess() method is wrong. After all, it's part of the Android API... What I do know is that calling this method while other threads are doing potentially important work (operations on files, writing to a DB, HTTP requests, running services...) can terminate them in the middle, which is clearly not good. I also know that "re-opening" the application can be faster because the system may still hold its state in memory from the last time it was used, and killProcess() prevents that. Besides these reasons, assuming I don't have such operations and I don't care that my application loads from scratch each run, are there other reasons not to use the killProcess() method? I know about the finish() method for closing an Activity, so please don't write to me about that... finish() applies only to an Activity, not to the whole application, and I think I know exactly why and when to use it. And another thing - I'm also developing games with the Unity3D framework and exporting the project to Android. When I decompiled the generated apk, I was very surprised to find that the Java source code created by Unity implements Unity's Application.Quit() method with Process.killProcess(Process.myPid()). Application.Quit() is supposed to be the right way to close a game according to the Unity3D guides (is it really? maybe I'm wrong and missed something), so how is it that Unity's framework developers, who seem to do very good work, implemented this on native Android with killProcess()? Anyway, I would like a "list of reasons" why not to use the killProcess() method, so please write down your answer if you have something interesting to say about that. TIA
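    For completeness, a sketch of the graceful alternative those posts usually have in mind. It assumes a plain Activity with some kind of Quit action, which is not part of the question; the point is that cleanup runs in the lifecycle callbacks and the framework closes the task, rather than the process being killed mid-flight:

        import android.app.Activity;
        import android.os.Bundle;
        import android.view.View;

        public class MainActivity extends Activity {
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
            }

            // Wired to a hypothetical "Quit" button in the layout.
            public void onQuitClicked(View view) {
                // Finish pending work (flush files, close the DB, stop services)
                // here or in onPause()/onDestroy(), where the framework expects it,
                // then close the whole task instead of killing the process.
                finishAffinity(); // API 16+; on older versions finish() each Activity
                // The empty process may stay cached; Android reclaims it when memory
                // is needed, which is exactly the behaviour killProcess() short-circuits.
            }
        }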

    Read the article

  • Is it good or bad practice to use var everywhere? [closed]

    - by Earlz
    Possible Duplicate: Use of var keyword in C# Hello, I've recently been discovering the awesomeness that is the var keyword in C#. Well, I didn't think about it before, but I just wrote lines of code along the lines of var con = CreateNewConnection(); where this would usually be IDbConnection con = CreateNewConnection(); Is this a good use of var? Is it possible to use var too often? Are there any downsides to using it? Also, one more point of consideration: we are not worried about backwards compatibility. We just care that it runs on .NET 3.5.

    Read the article
