Search Results

Search found 39456 results on 1579 pages for 'why do you'.


  • Logs show failed password for invalid user root from <IP Address> port 2924 ssh2

    - by Chris Hanson
    I'm getting a constant flow of these messages in my logs. The port is variable (seemingly between 1024 and 65535). I can simulate it myself by running sftp root@<my ip>. I've commented out the sftp subsystem line in my sshd_config, and these ports should be closed by the provider's firewall. I don't understand: (1) why sftp would be selecting a random port like that; it seems to be behaving like FTP in passive mode, but I can't make any sense of why that would be; (2) why it can even hit my server in the first place if these ports are closed.
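
    For what it's worth, the port in a log line like this is normally the client's ephemeral source port, not a listening port on the server. A quick way to observe that while an ssh/sftp session is open (a sketch, assuming a Linux box with iproute2 and sshd on the default port 22):

      # Show established connections to the local sshd; the peer column
      # displays the client's randomly chosen ephemeral source port.
      ss -tn 'sport = :22'
      # Older boxes without ss can use netstat instead:
      netstat -tn | grep ':22 '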

    Read the article

  • What is the deal with hard drive technology moving to 4K sectors, vs. 512 bytes? Are 4K sector disk

    - by Chris W. Rea
    I've noticed that some Western Digital hard drives are now sporting 4K sectors; that is, the sectors are larger: 4096 bytes vs. the de facto standard of 512 bytes. So: What's the big deal with 4K sectors? Is it marketing hype, or a real advantage? Why should somebody building a new PC care, or not care, about 4K sectors? Why is this transition taking place now? Why didn't it happen sooner? Are there things to look out for when buying a 4K-sector hard drive, e.g. incompatibility? Anything else we should know about 4K sectors?
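
    Not from the question, but a quick way to see what a given drive reports on Linux (a hedged sketch; the device name is an example):

      # Logical sector size (what the OS addresses) vs. physical sector size
      # (what the platters use); a 512-emulation 4K drive reports 512 and 4096.
      sudo blockdev --getss --getpbsz /dev/sda
      cat /sys/block/sda/queue/logical_block_size /sys/block/sda/queue/physical_block_size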

    Read the article

  • Unable to click on tabs in Firefox unless I press Command-Q

    - by Philip
    Why does clicking on tabs in Firefox sometimes fail until I hit Cmd-Q, why does that not quit the browser but instead make clicking on the tabs work again, and how do I stop this behavior? I'm running Firefox 3.5.5 on Mac OS X 10.5 and this has been happening for a while (with previous versions of FF as well). I can't forcibly reproduce the behavior, but every now and then (every few days?) I just can't click on tabs or on the x's to close the tabs. I can still Ctrl-Tab between tabs, though. But if I press Cmd-Q, instead of quitting, Firefox seems to seize for a second and then I can click on tabs and close them just fine. No clue why this is happening or how to stop it. And I do have tons of extensions installed, so it's plausible one of them is the problem. Thanks.

    Read the article

  • Apache denying requests with VirtualHosts

    - by Ross
    This is the error I get in my log: "Permission denied: /home/ross/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable". My VirtualHost is pretty simple:

      <VirtualHost 127.0.0.1>
          ServerName jotter.localhost
          DocumentRoot /home/ross/www/jotter/public
          DirectoryIndex index.php index.html
          <Directory /home/ross/www/jotter/public>
              AllowOverride all
              Order allow,deny
              allow from all
          </Directory>
          CustomLog /home/ross/www/jotter/logs/access.log combined
          ErrorLog /home/ross/www/jotter/logs/error.log
          LogLevel warn
      </VirtualHost>

    Any ideas why this is happening? I can't see why Apache is looking for a .htaccess there and don't know why this should stop the request. Thanks.
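
    A hedged sketch of the usual fix: with AllowOverride enabled, Apache checks every directory on the path down to the DocumentRoot for .htaccess files, so each parent directory (including the user's home) must be searchable by the Apache user. Paths are taken from the question; adjust as needed:

      # Execute permission on a directory means permission to traverse it:
      chmod o+x /home/ross /home/ross/www /home/ross/www/jotter /home/ross/www/jotter/public
      # Alternatively, stop Apache from consulting .htaccess outside directories you control:
      #     <Directory />
      #         AllowOverride None
      #     </Directory>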

    Read the article

  • Disable Offline Files (mobsync.exe) on Windows 7 Home

    - by Synetech
    This morning I was watching the CPU graph of a Windows 7 Home laptop and noticed that every few seconds the CPU would spike several percent. I watched the processes and determined that the culprit was mobsync.exe (Offline Files). I tried the usual steps that Googling turns up: clicking the Manage Offline Files link to bring up the Offline Files dialog and clicking Disable Sync does not work, because the dialog will not display. This makes sense, since everything I have read indicates that Offline Files is not even included/supported in the Home version, so I am at a loss as to why it is running at all, let alone why it is sucking up CPU cycles. (My best guess is that it was started when they pressed Win+X to access the Mobility Center.) Of course I can just kill mobsync, but it could always come back. How/why would mobsync be running on a Home version, and how can it be disabled (the Group Policy editor, of course, is not available on a Home version)?
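
    Purely as a hedged suggestion (unverified on a Home edition): where Offline Files exists as a service it runs as CscService, so it may be worth checking for it from an elevated prompt; the service may simply be absent on Home:

      :: See whether the Offline Files service is present and what state it is in
      sc query CscService
      :: If present, stop it and keep it from starting again
      net stop CscService
      sc config CscService start= disabled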

    Read the article

  • UTF-8 bit representation

    - by Yanick Rochon
    I'm learning about the UTF-8 standard and this is what I'm learning. Definition and bytes used:

      UTF-8 binary representation          Meaning
      0xxxxxxx                             1 byte, for 1- to 7-bit chars
      110xxxxx 10xxxxxx                    2 bytes, for 8- to 11-bit chars
      1110xxxx 10xxxxxx 10xxxxxx           3 bytes, for 12- to 16-bit chars
      11110xxx 10xxxxxx 10xxxxxx 10xxxxxx  4 bytes, for 17- to 21-bit chars

    And I'm wondering: why isn't the 2-byte UTF-8 code 10xxxxxx instead, thus gaining 1 bit all the way up to 22 bits with a 4-byte UTF-8 code? The way it is right now, 64 possible values are lost (from 10000000 to 10111111). I'm not trying to argue with the standard, but I'm wondering why this is so?

    ** EDIT ** Even, why isn't it

      UTF-8 binary representation          Meaning
      0xxxxxxx                             1 byte, for 1- to 7-bit chars
      110xxxxx xxxxxxxx                    2 bytes, for 8- to 13-bit chars
      1110xxxx xxxxxxxx xxxxxxxx           3 bytes, for 14- to 20-bit chars
      11110xxx xxxxxxxx xxxxxxxx xxxxxxxx  4 bytes, for 21- to 27-bit chars

    ...? Thanks!
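
    As an aside, the lead/continuation byte pattern in the table is easy to see from a shell (a sketch, assuming a UTF-8 terminal and the xxd tool):

      # U+00E9 (é) needs 8 bits, so it gets the 2-byte pattern 110xxxxx 10xxxxxx:
      printf 'é' | xxd -b     # -> 11000011 10101001
      # U+20AC (€) needs 14 bits, so it gets 1110xxxx 10xxxxxx 10xxxxxx:
      printf '€' | xxd -b     # -> 11100010 10000010 10101100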

    Read the article

  • routing weirdness - traceroute 'vanishes' en route

    - by The Journeyman geek
    I'm attempting to set up one of my boxes as a server (again), but I'm having some odd connection issues: the box itself connects fine to the internet, but trying to connect to my external IP address seems to result in the trace getting 'lost' partway. http://pastebin.com/HCQAGbvn - this is a traceroute from another system that's connected to another ISP. StarHub is my own ISP, while I also have access to a system on SingTel. I'm wondering if my ISP is messing around with routing, or if something very odd is going on. As you'll note, the traceroute doesn't reach me. If it helps, I use a DD-WRT router. Edit: Facepalmishly, turning the firewall on my router on and off fixed it. I don't get why it dropped off at different IP addresses each time, or why the router set itself to block... everything, or why it affected the IPv6 tunnel as well.

    Read the article

  • File/folder Write/Delete wise, is my server secure?

    - by acidzombie24
    I wanted to know: if someone got access to my server using a non-root account, how much damage could he do? After I su someuser, I used this command to find all files and folders that are writeable: find / -writable >> list.txt. Here is the result. It's mostly /dev/something and /proc/something entries, plus these: /var/lock /var/run/mysqld/mysqld.sock /var/tmp /var/lib/php5. Is my system secure? /var/tmp makes sense, but I am unsure why this user has write access to those other folders. Should I change them? stat /var/lib/php5 gives me 1733, which is odd. Why write access? Why no read? Is this some kind of weird use of a temp file?
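
    As a hedged aside, two standard GNU commands that may help dig further (paths taken from the question):

      # Repeat the scan but skip pseudo-filesystems, so only real on-disk paths show up:
      find / \( -path /proc -o -path /sys -o -path /dev \) -prune -o -writable -print > list.txt
      # Decode that odd 1733 mode symbolically; the trailing 't' is the sticky bit,
      # and '-wx' means others can create files there but not list the directory:
      stat -c '%A %a %U:%G %n' /var/lib/php5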

    Read the article

  • How to explain DRM cannot work?

    - by jerryjvl
    I am looking for the shortest comprehensive way to explain, to people who are trying to use DRM as a technology to prevent users from using their data in some fashion deemed undesirable, why their solution cannot work by definition. Ideally I'd like something that: covers why it is technically impossible to let people access local data, but only in such-and-such a way; imparts an understanding of why this is, to avoid follow-on "But what if" rebuttals; and is intuitive enough and short enough that even a politician (j/k) could grasp it. When faced with this situation I try to be clear and concise, but I usually end up failing on at least one of these points. I'd really like to have a 'stock' answer that I can use in the future.

    Read the article

  • Disabled annotation tools in Skim

    - by Kit
    Here's a portion of the toolbar in Skim. In the Add Note section, why are the following tools disabled (dimmed)?

      - Add New Highlight (A in the yellow box)
      - Add New Underline (red line under A)
      - Add New Strike Out (A struck out by red line)

    However, in the Tool Mode section, there is a drop-down button (shown as active in the screenshot). To illustrate: I can select and use the Add New Underline tool, as well as the other tools I mentioned above, using the drop-down button. But those tools are dimmed out in the Add Note section. Why? I have observed that the drop-down button is just a duplicate of the Add Note section. Why not just enable all the buttons in the Add Note section and save the user from making an extra click just to bring down a list of tools? Is this because of some property of the presently open PDF, or what?

    Read the article

  • running commands as other users - best method

    - by linuxrawkstar
    When running commands as other users from the command line, what is the recommended best practice? In the past I've used sudo, like so: sudo -u username command [args]. I've been told (with no specific reasons why) that using sudo for this purpose is wrong. I'd like to know why. Is there some "best way" to accomplish this? For example, I've also used the su command, like so: su - username -c "command [args]". I can't imagine why either of these methods would be "bad". Your thoughts?
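
    For reference, the two forms from the question side by side, plus a third that some distributions ship (a sketch; username/command are placeholders):

      sudo -u username command args        # logged per command; fine-grained control via sudoers
      su - username -c 'command args'      # asks for the target account's password unless run as root
      runuser -u username -- command args  # util-linux; root only, no password prompt or sudoers needed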

    Read the article

  • Windows 7 spins up HDD when files should be on SSD

    - by ZorroDeLaArena
    I run 64-bit Windows 7 Ultimate off a Corsair 128GB SSD (CMFSSD-128GBG2D) and I've got two older HDDs I use for storing media. However, even though the operating system and all the programs are installed on the SSD, frequently when I'm starting programs I can hear the computer spin up one of the HDDs before the app starts, and I'm not sure why (I don't even know which drive it is). For programs that should open very quickly - Programmer's Notepad, Calculator (the basic one that comes with Windows), etc. - having to wait several seconds can be pretty obnoxious. Avoiding wait times like that is why I got the SSD in the first place. Is there any way to tell why this is happening? I'm happy to provide more information, but I'm not sure what's important. Any help would be really appreciated!

    Read the article

  • How come Core i7 (desktop) dominates Xeon (server)?

    - by grant tailor
    I have been using these performance benchmark results (http://www.cpubenchmark.net/high_end_cpus.html) to select what CPUs to use on my web server, and to my surprise it looks like Core i7 CPUs dominate the list, pushing Xeon CPUs into the bush. Why is this? Why is Intel making the Core i7 perform better than the Xeon? Are desktop CPUs supposed to perform better than server-grade Xeon CPUs? I really don't get this and would like to know what you think, or why this is so. Also, I am thinking about getting a new web server and deciding between the i7-2600 and the Xeon E3-1245. The i7-2600 is higher up in the performance benchmark, but the Xeon E3-1245 is server grade. What do you guys think? Should I go for the i7-2600? Or is the Xeon E3-1245 a server-grade CPU for a reason?

    Read the article

  • Can't connect Tomcat via JMX

    - by Icarokun
    I couldn't connect to one Tomcat server via JMX in a Linux virtual machine. There was no firewall running; all seemed OK. By searching the web I found out that I had to use the -Djava.rmi.server.hostname property to fix it. It worked, but I don't understand why. My machine has five Tomcat servers running, all of them with JMX enabled on consecutive ports (8008, 8018, 8028...), all with the same configuration, and only one of them had this issue connecting via JMX. No firewall, no -Djava.rmi.server.hostname property in any Tomcat. I understand the problem, but I don't understand why four of my Tomcat servers worked and one of them did not. Why is this?
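
    A hedged sketch of where that property typically ends up for a Tomcat instance (the file and values are examples; one block per instance, each with its own port):

      # $CATALINA_BASE/bin/setenv.sh -- picked up by catalina.sh on startup
      CATALINA_OPTS="$CATALINA_OPTS \
        -Dcom.sun.management.jmxremote \
        -Dcom.sun.management.jmxremote.port=8008 \
        -Dcom.sun.management.jmxremote.authenticate=false \
        -Dcom.sun.management.jmxremote.ssl=false \
        -Djava.rmi.server.hostname=203.0.113.10"   # the address JMX clients actually connect to
      export CATALINA_OPTS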

    Read the article

  • What are the pros of switching DNS names with a database server hardware upgrade?

    - by wilbbe01
    When we upgrade to new hardware at work, we usually increment a number in the DNS name. For example, we have a server called database-2 that is slated to become database-3 in the coming days. I haven't been able to find a good reason why this is good behavior. To me, the work of trying to catch all end-user machines, as well as all servers dependent on the database server, is far riskier than simply moving the database and its IP/name to the new hardware. A little over a year ago we spent several months fielding requests as infrequent users began using software that needed to be updated to point to a new DNS name. I am struggling to find answers as to why this is a good practice. So, the question: why is using DNS names as a "server hardware version identifier" a good idea? What am I overlooking? Thanks much.
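
    One hedged alternative sketch: give each service a stable alias and point it at whatever hardware currently hosts it, so clients never need to be touched (names are hypothetical BIND-style records):

      # db.example.com is what every client and app config references;
      # only the CNAME target changes when the hardware is replaced.
      #     db.example.com.   IN  CNAME  database-3.example.com.
      # Verify what the alias currently resolves to:
      dig +short db.example.com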

    Read the article

  • SQL SERVER – Shrinking NDF and MDF Files – Readers’ Opinion

    - by pinaldave
    Previously, I had written a blog post about SQL SERVER – Shrinking NDF and MDF Files – A Safe Operation. After that, I wrote the following blog post that talks about the advantages and disadvantages of shrinking and why one should not be shrinking a file: SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008. On this subject, SQL Server expert Imran Mohammed left an excellent comment. I just feel that his comment is worth a big article in itself. For everybody to read his wonderful explanation, I am posting this blog post here. Thanks Imran! Shrinking a database always creates performance degradation and increases fragmentation in the database. I suggest that you keep that in mind before you start reading the following comment. If you are going to say Shrinking Database is bad and evil, here I am saying it first and loud. Now, Imran's comment was written keeping in mind only the process showing how the Shrinking Database operation works. Imran has already explained his understanding and requests further explanation. I have removed the Best Practices section from Imran's comments, as there are a few corrections.

    Comments from Imran -

    Before I explain to you the concept of Shrink Database, let us understand the concept of Database Files. When we create a new database inside SQL Server, it is typical that SQL Server creates two physical files in the Operating System: one with the .MDF extension, and another with the .LDF extension. .MDF is called the Primary Data File. .LDF is called the Transaction Log file. If you add one or more data files to a database, the physical file that will be created in the Operating System will have the extension .NDF, which is called a Secondary Data File; whereas, when you add one or more log files to a database, the physical file that will be created in the Operating System will have the same .LDF extension. The questions now are: "Why does a new data file have a different extension (.NDF)?", "Why is it called a secondary data file?" and "Why is the .MDF file called a primary data file?"

    Answers: Note: The following explanation is based on my limited knowledge of SQL Server, so experts please do comment. A data file with the .MDF extension is called a Primary Data File, and the reason behind it is that it contains Database Catalogs. Catalogs mean Meta Data. Meta Data is "data about data". Examples of Meta Data include system objects that store information about other objects, except the data stored by the users. sysobjects stores information about all objects in that database. sysindexes stores information about all indexes and rows of every table in that database. syscolumns stores information about all columns that each table has in that database. sysusers stores information about the users that database has. Although Meta Data stores information about other objects, it is not the transactional data that a user enters; rather, it's system data about the data. Because the Primary Data File (.MDF) contains important information about the database, it is treated as a special file. It is given the name Primary Data File because it contains the Database Catalogs. This file is present in the Primary File Group. You can always create additional objects (tables, indexes, etc.) in the Primary Data File (this file is present in the Primary File Group) by specifying that you want to create the object under the Primary File Group.
    Any additional data file that you add to the database will have only transactional data but no Meta Data, and that is why it is called a Secondary Data File. It is given the extension .NDF so that the user can easily identify whether a specific data file is a Primary Data File or a Secondary Data File. There are many advantages to storing data in different files under different file groups. You can put your read-only tables in one file (file group) and your read-write tables in another file (file group), and take a backup of only the file group that has the read-write data, so that you can avoid backing up read-only data that cannot be altered. Creating additional files on different physical hard disks also improves I/O performance.

    A real-world scenario where we use files could be this one: let's say you have created a database called MYDB on the D drive, which has 50 GB of space. You also have 1 database file (.MDF) and 1 log file on the D drive, and suppose that all of that 50 GB has been used up and you do not have any free space left, but you still want to add additional space to the database. One easy option would be to add one more physical hard disk to the server, add a new data file to the MYDB database, create this new data file on the new hard disk, then move some of the objects from one file to another, and set the file group under which you added the new file as the default file group, so that any new object that is created goes into the new files unless specified otherwise.

    Now that we have a basic idea of what data files are, what type of data they store and why they are named the way they are, let's move on to the next topic, Shrinking. First of all, I disagree with the Microsoft terminology for naming this feature "Shrinking". Shrinking, in regular terms, means to reduce the size of a file by means of compressing it. BUT in SQL Server, Shrinking DOES NOT mean compressing. Shrinking in SQL Server means removing empty space from the database files and releasing that empty space either to the Operating System or to SQL Server.

    Let's examine this through an example. Let's say you have a database "MYDB" with a size of 50 GB that has about 20 GB of free space, which means 30 GB of the database is filled with data and 20 GB of space is free because it is not currently utilized by SQL Server (the database); it is reserved and not yet in use. If you choose to shrink the database and release the empty space to the Operating System (and MIND YOU), you can only shrink the database down to 30 GB (in our example). You cannot shrink the database to a size less than what is filled with data. So, if you have a database that is full and has no empty space in the data file and log file (and you don't have extra disk space to set the Auto Growth option ON), YOU CANNOT issue the SHRINK Database/File command, for two reasons: (1) There is no empty space to be released, because the Shrink command does not compress the database; it only removes the empty space from the database files, and there is no empty space. (2) Remember, the Shrink command is a logged operation. When we perform the Shrink operation, this information is logged in the log file. If there is no empty space in the log file, SQL Server cannot write to the log file and you cannot shrink the database.

    Now answering your questions: (1) Q: What are the USEDPAGES and ESTIMATEDPAGES that appear in the Results Pane after using DBCC SHRINKDATABASE (NorthWind, 10)?
    A: According to Books Online (for SQL Server 2000): UsedPages: the number of 8-KB pages currently used by the file. EstimatedPages: the number of 8-KB pages that SQL Server estimates the file could be shrunk down to. Important note: before asking any question, make sure you go through Books Online or search on Google once. Doing so has many advantages: 1. If someone else has already had this question before, the chances that it has already been answered are more than 50%. 2. This reduces your waiting time for the answer.

    (2) Q: What is the difference between shrinking the database using a DBCC command like the one above and shrinking it from the Enterprise Manager console by right-clicking the database, going to TASKS and then selecting the SHRINK option, in a SQL Server 2000 environment? A: As far as my knowledge goes, there is no difference; both will work the same way. One advantage of using the command from Query Analyzer is that your console won't freeze; you can perform your regular activities using Enterprise Manager.

    (3) Q: What is this .NDF file that is discussed above? I have never heard of it. What is it used for? Is it used by end users, DBAs or the SERVER/SYSTEM itself? A: An .NDF file is a secondary data file. You have never heard of it because when a database is created, SQL Server creates the database by default with only 1 data file (.MDF) and 1 log file (.LDF), or however your model database has been set up, because the model database is a template used every time you create a new database using the CREATE DATABASE command. Unless you have added an extra data file, you will not see it. This file is used by SQL Server to store the data saved by the users.

    Hope this information helps. I would like to ask the experts to please comment if what I understand is not what the Microsoft guys meant. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
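
    For readers who want to try the commands discussed in the article above, a minimal sketch using the sqlcmd utility (server, database and logical file names are examples; remember the article's caveats about fragmentation before shrinking anything):

      # Shrink the whole database, leaving 10 percent free space:
      sqlcmd -S localhost -E -Q "DBCC SHRINKDATABASE (MYDB, 10);"
      # Or shrink a single file (addressed by its logical name) down to roughly 512 MB:
      sqlcmd -S localhost -E -Q "DBCC SHRINKFILE (MYDB_Data2, 512);"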

    Read the article

  • Clang warning flags for Objective-C development

    - by Macmade
    As a C & Objective-C programmer, I'm a bit paranoid about compiler warning flags. I usually try to find a complete list of warning flags for the compiler I use and turn most of them on, unless I have a really good reason not to. I personally think this may actually improve coding skills as well as potential code portability, and prevent some issues, as it forces you to be aware of every little detail and of potential implementation and architecture issues, and so on... It's also, in my opinion, a good everyday learning tool, even if you're an experienced programmer. For the subjective part of this question, I'm interested in hearing from other developers (mainly C, Objective-C and C++) on this topic. Do you actually care about stuff like pedantic warnings, etc.? And if yes or no, why? Now, about Objective-C: I recently switched completely to the LLVM toolchain (with Clang) instead of GCC. On my production code, I usually set these warning flags (explicitly, even if some of them may be covered by -Wall):

      -Wall -Wbad-function-cast -Wcast-align -Wconversion -Wdeclaration-after-statement
      -Wdeprecated-implementations -Wextra -Wfloat-equal -Wformat=2 -Wformat-nonliteral
      -Wfour-char-constants -Wimplicit-atomic-properties -Wmissing-braces -Wmissing-declarations
      -Wmissing-field-initializers -Wmissing-format-attribute -Wmissing-noreturn -Wmissing-prototypes
      -Wnested-externs -Wnewline-eof -Wold-style-definition -Woverlength-strings -Wparentheses
      -Wpointer-arith -Wredundant-decls -Wreturn-type -Wsequence-point -Wshadow -Wshorten-64-to-32
      -Wsign-compare -Wsign-conversion -Wstrict-prototypes -Wstrict-selector-match -Wswitch
      -Wswitch-default -Wswitch-enum -Wundeclared-selector -Wuninitialized -Wunknown-pragmas
      -Wunreachable-code -Wunused-function -Wunused-label -Wunused-parameter -Wunused-value
      -Wunused-variable -Wwrite-strings

    I'm interested in hearing what other developers have to say about this. For instance, do you think I missed a particular flag for Clang (Objective-C), and why? Or do you think a particular flag is not useful (or not wanted at all), and why?
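
    For context, a hedged example of what compiling a single file with a subset of these flags looks like (file names are hypothetical):

      # Warnings are just extra -W switches on the compile line; -Werror optionally
      # promotes them to errors so they cannot be ignored:
      clang -Wall -Wextra -Wconversion -Wshadow -Wswitch-enum -Wundeclared-selector \
            -Werror -c MyView.m -o MyView.o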

    Read the article

  • Oracle Linux Forum

    - by rickramsey
    This forum includes live chat so you can tell Wim, Lenz, and the gang what you really think. Linux Forum - Tuesday March 27. Since Oracle recently made Release 2 of its Unbreakable Enterprise Kernel available (see Lenz's blog), we're following up with an online forum with Oracle's Linux executives and engineers. Topics will be:

      9:30 - 9:45 am PT: Oracle's Linux Strategy. Edward Screven, Oracle's Chief Corporate Architect, and Wim Coekaerts, Senior VP of Linux and Virtualization Engineering, will explain Oracle's Linux strategy, the benefits of Oracle Linux, Oracle's role in the Linux community, and the Oracle Linux roadmap.

      9:45 - 10:00 am PT: Why Progressive Insurance Chose Oracle Linux. John Dome, Lead Systems Engineer at Progressive Insurance, outlines why they selected Oracle Linux with the Unbreakable Enterprise Kernel to reduce cost and increase the performance of database applications.

      10:00 - 11:00 am PT: What's New in Oracle Linux. Oracle engineers walk you through new features in Oracle Linux, including zero-downtime updates with Ksplice, Btrfs and OCFS2, DTrace for Linux, Linux Containers, vSwitch and T-Mem.

      11:00 am - 12:00 pm PT: Get More Value from your Linux Vendor. Why Oracle Linux delivers more value than Red Hat Enterprise Linux, including better support at lower cost, best practices for deployments, extreme performance for cloud deployments and engineered systems, and more.

    Date: Tuesday, March 27, 2012. Time: 9:30 AM PT / 12:30 PM ET. Duration: 2.5 hours. Register here. - Rick

    Read the article

  • Cryptographic Validation Explained

    - by MarkPearl
    We have been using LogicNP's CryptoLicensing for some of our software, and I was battling to understand how exactly the whole process worked. I was sent the following document, which really helped explain it, so if you ever use the same tool it is well worth a read.

    Licensing Basics: LogicNP CryptoLicensing For .Net is the most advanced and state-of-the-art licensing and copy protection system you can use for your software. The LogicNP CryptoLicensing System uses the latest cryptographic technology to generate and validate licenses. The cryptographic algorithm used is the RSA algorithm, which uses a pair of keys called the generation key and the validation key. Data encrypted using the generation key can only be decrypted using the corresponding validation key.

    How does cryptographic validation work? When a new license project is created, a unique validation-generation key pair is created for the project. When LogicNP CryptoLicensing For .Net generates licenses, it encrypts the license settings using the generation key. The validation key can be safely distributed with your software and is used during validation. During license validation, LogicNP CryptoLicensing For .Net attempts to decrypt the encrypted license code using the validation key. If the decryption is successful, this means that the data was encrypted using the generation key, since only the corresponding validation key can decrypt data encrypted with the generation key. This further means that not only is the license valid, but that it was generated by you and only you, since nobody else has access to the generation key.

    Generation Key: This key is used by the CryptoLicensing Generator to generate encrypted license codes. This key is stored in the license project file, so the license project file must be kept secure and confidential and must be accorded the same care as any other critical asset, such as source code.

    Validation Key: This key is used for validating generated license codes. It is the same key displayed in the 'Get Validation Key And Code' dialog (Ctrl+K) and is used by your software when validating license codes (using LogicNP.CryptoLicensing.dll). Unlike the generation key, it is not necessary to keep this key secure and confidential. Note that the generation key pair is stored in the project file created by LogicNP CryptoLicensing For .Net, so it is very important to back up this file and to keep it secure. Once the file is lost, it is not possible to retrieve the key pair.

    FAQ

    Q. Do I use the same validation key to validate all license codes? A. Yes, the validation key (and generation key) for the project remains the same; you use the same key to validate all license codes generated using the project. You can retrieve the validation key using the "Project" menu --> "Get Validation Key & Code" menu item.

    Q. Can license codes generated using the generation key from one project be validated using the validation key of another project? A. No!

    Q. Is every generated license code unique? A. Yes, every license code generated by CryptoLicensing is guaranteed to be unique, even if you generate thousands of codes at a time.

    Q. What makes CryptoLicensing so secure? A. CryptoLicensing uses the latest cryptographic technology to generate and validate licenses. The cryptographic algorithm used is the RSA asymmetric key algorithm, which can use up to 3072-bit keys. Given current computing power, it takes years to break a 3072-bit key.

    Q. Is it possible for a hacker to develop a keygen for my software? A. Impossible.
    The cryptographic algorithm used by CryptoLicensing consists of a pair of keys called the generation key and the validation key. Data encrypted with one key can only be decrypted by the other key, and vice versa. Licenses are generated using the generation key and validated using the validation key. Without the generation key, it is impossible to generate valid licenses.

    Q. What is the difference between the validation key and the generation key? A. Generation Key: This key is used by the CryptoLicensing Generator to generate encrypted license codes. This key is stored in the license project file, so the license project file must be kept secure and confidential and must be accorded the same care as any other critical asset, such as source code. Validation Key: This key is used for validating generated license codes. It is the same key displayed in the 'Get Validation Key And Code' dialog (Ctrl+K) and is used by your software when validating license codes (using LogicNP.CryptoLicensing.dll). Unlike the generation key, it is not necessary to keep this key secure and confidential.

    Q. Do I have to include the license project file (.licproj) with my software? A. No!!! This goes against the very essence of the security of the asymmetric cryptographic scheme, because the project file contains both the validation and generation keys. With your software, you only need to include the validation key, which will be used to validate licenses generated by CryptoLicensing using the generation key. The license project file should be treated like any other valuable and confidential asset, such as your source code.

    Q. Does the license service need the license project file? A. Yes. The license project file is needed whenever new licenses are generated (via the UI, via the API or via the license service). As just one example, the license service generates new machine-locked licenses when activated licenses are presented to it for activation; therefore the license service needs the license project file.

    Q. Is it possible to embed my own data in the generated licenses? A. Yes. You can embed any amount of additional data in the licenses. This data will have the same amount of security as the license code itself and will be tamper-proof. The embedded user data can be retrieved from your software.

    Q. What additional steps can I take to ensure that my software does not get cracked? A. There are many methods and techniques which can make it extremely difficult for a hacker to crack your software. See Writing Effective License Checking Code And Designing Effective Licenses for more information.

    Q. Why is the license service not working? A. The most common cause is not setting the CryptoLicense.LicenseServiceURL property before trying to validate a license. Make sure that this property is set to the correct URL where your license service is hosted. The next most common cause is that the license project file on the web server where your license service is hosted is not the latest. This happens if you make changes to the license project (for example, setting the 'Enable With Serials' setting for a profile) but don't upload the updated project file to your web server.

    Q. Why are my serials not working? A. Serial codes require the use of a license service. See Using Serial Codes for more details. Also see the earlier question 'Why is the license service not working?'

    Q. Is the same validation key used to validate license codes generated from different profiles? A. Yes.
    Profiles are just pre-specified license settings for quickly generating licenses having those settings. The actual license code is still generated using the license project's cryptographic generation key and thus can be validated using the project's validation key.

    Q. Why are changes made to a profile not getting saved? A. Simply changing license settings via the UI and saving the license project does not save those license settings to the active profile. You must first save the license settings to a profile using the Save/Save As command from the Profiles menu (see above).

    Q. Why is validation of activated licenses failing from CryptoLicensing Generator, but working from my software? A. Make sure that you have specified the URL of the license service using the Project Properties dialog. Also see the earlier question 'Why is the license service not working?'

    Q. How can I extend the trial period of my customer? A. To extend the evaluation period of the customer, simply send him a new license code specifying the desired evaluation limits. Evaluation information such as the currently used days, executions, etc. is stored in garbled form in a registry location which is derived from the license code. Therefore, when a new license code is used, the old evaluation information will not be used and a new evaluation period will be started.
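
    The generation/validation key split described above is plain asymmetric cryptography; here is a hedged illustration of the same idea using OpenSSL (CryptoLicensing's own file formats and APIs are proprietary, so this only mirrors the concept):

      # "Generation key": keep it secret, like the .licproj file.
      openssl genrsa -out generation.pem 3072
      # "Validation key": the derived public half, safe to ship with the application.
      openssl rsa -in generation.pem -pubout -out validation.pem
      # Sign some license settings with the generation key...
      echo "user=alice;expires=2013-01-01" > license.txt
      openssl dgst -sha256 -sign generation.pem -out license.sig license.txt
      # ...and validate with the validation key; prints "Verified OK" only for genuine licenses.
      openssl dgst -sha256 -verify validation.pem -signature license.sig license.txt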

    Read the article

  • What is the economic rationale behind programmers who work on an open source project (free) instead of a commercial project (not free)?

    - by Kim Jong Woo
    I can't understand why some people dedicate so many hours to a completely open source project without closing it and yielding greater profit from it. I don't think profiting from your code is evil; I think it's a great motivator. Why do some people feel that commercial software, and generating money from it, is bad? There seems to be this black-and-white thinking that open source = good, commercial = bad. I hardly find this convincing, and often commercial companies which are supported by sales produce very good results. An open source project in the same niche can't compete against the corporation. Of course, sometimes this is completely the other way around, where private companies produce an inferior product compared to the open source counterparts. So help me understand: why do programmers open source their code when there are commercial prospects for it? Shouldn't the rational programmer or human being make every effort to capitalize on their opportunity cost? Why work on an open source project for months when you could've spent the same number of hours at a commodity wage or for some other monetary compensation?

    Read the article

  • Sucky MSTest and the "WaitAll for multiple handles on a STA thread is not supported" Error

    - by Anne Bougie
    If you are doing any multi-threading and are using MSTest, you will probably run across this error. For some reason, MSTest by default runs in STA threading mode. WTF, Microsoft! Why so stuck in the old COM world? When I run the same test using NUnit, I don't have this problem. Unfortunately, my company has chosen MSTest, so I have a lot of testing problems. NUnit is so much better, IMO. After determining that I wasn't referencing any unmanaged code that would flip the thread into STA, which can also cause this error, the only thing left was the testing suite I was using. I dug around a little and found this obscure setting for the Test Run Config settings file that you can't set using its interface. You have to open it up as a text file and add the following setting:

      <ExecutionThread apartmentState="MTA" />

    This didn't break any other tests, so I'm not sure why it's not the default, or why there is nothing in the test run configuration app to change this setting. Here is the code I was testing:

      public void ProcessTest(ProcessInfo[] infos)
      {
          // One wait handle per work item, so the test can block until all finish.
          WaitHandle[] waits = new WaitHandle[infos.Length];
          int i = 0;
          foreach (ProcessInfo info in infos)
          {
              AutoResetEvent are = new AutoResetEvent(false);
              info.Are = are;
              waits[i++] = are;

              Processor pr = new Processor();
              WaitCallback callback = pr.ProcessTest;
              ThreadPool.QueueUserWorkItem(callback, info);
          }

          // Throws "WaitAll for multiple handles on a STA thread is not supported"
          // when the test runner executes this on an STA thread.
          WaitHandle.WaitAll(waits);
      }

    Read the article

  • What is recommended minimum object size for gzip performance benefits?

    - by utt73
    I'm working on improving page display times, and one of the methods is to gzip content from the webserver. Google recommends: "Note that gzipping is only beneficial for larger resources. Due to the overhead and latency of compression and decompression, you should only gzip files above a certain size threshold; we recommend a minimum range between 150 and 1000 bytes. Gzipping files below 150 bytes can actually make them larger." We serve our content through Akamai, using their network for a proxy and CDN. What they've told me: "Following up on your question regarding the minimum size at which Akamai will compress the requested object when sending it to the end user: the minimum size is 860 bytes." My reply: what is the reason Akamai's minimum size is 860 bytes? And why, for example, is this not the case for files Akamai serves for Facebook? (see below) Google recommends gzipping more aggressively, and that seems appropriate on our site, where the most frequent hits, by far, are AJAX calls under 860 bytes. Akamai's response: "The reason 860 bytes is the minimum size for compression is twofold: (1) The overhead of compressing an object under 860 bytes outweighs the performance gain. (2) Objects under 860 bytes can be transmitted via a single packet anyway, so there isn't a compelling reason to compress them." So I'm here for some fact checking. Is the 860-byte limit due to packet size the end of this reasoning? Why would high-traffic sites push this down to the 150-byte limit... just to save on bandwidth costs (since CDNs base their charges on bandwidth offloaded from origin), or is there a performance gain in doing so?
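
    The overhead claim is easy to sanity-check from a shell (a rough sketch; exact byte counts vary with gzip version and settings):

      # A tiny payload actually grows: gzip adds a ~20-byte header/trailer plus block overhead.
      printf 'hello' | gzip -c | wc -c            # 5 bytes in, ~25 bytes out
      # A larger, repetitive payload compresses well:
      yes 'the same line over and over' | head -c 5000 | gzip -c | wc -c   # ~5000 in, far fewer out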

    Read the article

  • Arguments to homologate Firefox in a Company

    - by Vegetus
    I developed a web project for my company, and this project was designed to use Mozilla Firefox (including the JavaScript (jQuery)). However, now the company wants the project moved to Internet Explorer. I know that via Google there are several explanations about Mozilla Firefox which I can show the company. But is there any link showing how Internet Explorer handles the W3C standards, with several justifications for using Mozilla Firefox? I searched on YouTube and SlideShare, but both have very weak arguments for me to select and show to the company. The company where I work is still very naive in keeping Internet Explorer. 1) The project is an intranet; only 400 internal employees can access the web application. 2) The company argues that Mozilla Firefox is not approved by the company. Any suggestions? Any link which shows that the developers of the world hate Internet Explorer? A link explaining why developers do not like Internet Explorer? After the answers, I'm thinking of making a great slide deck with all the necessary arguments for the company to homologate Firefox, and then publishing it on SlideShare. EDIT: Someone here must be wondering why I did not also design for Internet Explorer. Well... as the deadline for project completion is always short, I developed the project focused only on Mozilla Firefox, because Firefox respects the W3C standards (and JavaScript too) better than Internet Explorer.

    Read the article

  • How to Avoid Your Next 12-Month Science Project

    - by constant
    While most customers immediately understand how the magic of Oracle's Hybrid Columnar Compression, intelligent storage servers and flash memory makes Exadata uniquely powerful against home-grown database systems, some people think that Exalogic is nothing more than a bunch of x86 servers, a storage appliance and an InfiniBand (IB) network, built into a single rack. After all, isn't this exactly what the High Performance Computing (HPC) world has been doing for decades? On the surface, this may be true. And some people tried exactly that: they tried to put together their own version of Exalogic, but then they discovered there's a lot more to building a system than buying hardware and assembling it together. IT is not Ikea. Why is that so? Could it be there's more going on behind the scenes than merely putting together a bunch of servers, a storage array and an InfiniBand network into a rack? Let's explore some of the special sauce that makes Exalogic unique and un-copyable, so you can save yourself from your next 6- to 12-month science project that distracts you from doing real work that adds value to your company.

    Engineering Systems is Hard Work! The backbone of Exalogic is its InfiniBand network: 4 times better bandwidth than even 10 Gigabit Ethernet, and only about a tenth of its latency. What a potential for increased scalability and throughput across the middleware and database layers! But InfiniBand is a beast that needs to be tamed. It is true that Exalogic uses a standard, open-source Open Fabrics Enterprise Distribution (OFED) InfiniBand driver stack. Unfortunately, this software has been developed by the HPC community with raw speed in mind (which is good) but, despite the name, not many other enterprise-class requirements are included (which is less good). Here are some of the improvements that Oracle's InfiniBand development team had to add to the OFED stack to make it enterprise-ready, simply because typical HPC users didn't have the need to implement them:

    More than 100 bug fixes in the pieces that were not related to the Message Passing Interface (MPI), which is the protocol that HPC users use most of the time, but which is less useful in the enterprise.

    Performance optimizations and tuning across the whole IB stack: from switches, Host Channel Adapters (HCAs) and drivers to low-level protocols, middleware and applications. Yes, even the standard HPC IB stack could be improved in terms of performance.

    Ethernet over IB (EoIB): Exalogic uses InfiniBand internally to reach high performance, but it needs to play nicely with the datacenters around it. That's why Oracle added Ethernet over InfiniBand technology, which allows for creating many virtual 10GbE adapters inside Exalogic's nodes that are aggregated and connected to Exalogic's IB gateway switches. While this is an open standard, it's up to the vendor to implement it. In this case, Oracle integrated the EoIB stack with Oracle's own IB-to-10GbE gateway switches, and made it fully virtualized from the beginning. This means that Exalogic customers can completely rewire their server infrastructure inside the rack without having to physically pull or plug a single cable - a must-have for every cloud deployment. Anybody who wants to match this level of integration would need to add an InfiniBand switch development team to their project. Or just buy Oracle's gateway switches, which are conveniently shipped with a whole server infrastructure attached!
    IPv6 support for InfiniBand's Sockets Direct Protocol (SDP), Reliable Datagram Sockets (RDS), TCP/IP over IB (IPoIB) and EoIB protocols. Because no IPv6 = not very enterprise-class.

    HA capability for SDP. High Availability is not a big requirement for HPC, but for enterprise-class application servers it is. Every node in Exalogic's InfiniBand network is connected twice for redundancy. If any cable, port or HCA fails, there's always a replacement link ready to take over. This requires extra magic at the protocol level to work. So, in addition to WebLogic's failover capabilities, Oracle implemented IB automatic path migration at the SDP level to avoid unnecessary failover operations at the middleware level.

    Security, for example spoof-protection. Another feature that is less important for traditional users of InfiniBand, but very important for enterprise customers.

    InfiniBand Partitioning and Quality-of-Service (QoS): One of the first questions we get from customers about Exalogic is: "How can we implement multi-tenancy?" The answer is to partition your IB network, which effectively creates many networks that work independently and that are protected at the lowest networking layer possible. In addition to that, QoS allows administrators to prioritize traffic flow in multi-tenancy environments so they can keep their service levels where it matters most.

    Resilient IB Fabric Management: InfiniBand is a self-managing network, so a lot of the magic lies in coming up with the right topology and in teaching the subnet manager how to properly discover and manage the network. Oracle's InfiniBand switches come with pre-integrated, highly available fabric management with seamless integration into Oracle Enterprise Manager Ops Center.

    In short: Oracle elevated the OFED InfiniBand stack into an enterprise-class networking infrastructure. Many years and multiple teams of manpower went into the above improvements - this is something you can only get from Oracle, because no other InfiniBand vendor can give you these features across the whole stack!

    Exabus: Because it's not About the Size of Your Network, it's How You Use it! So let's assume that you somehow were able to get your hands on an enterprise-class IB driver stack. Or maybe you don't care and are just happy with the standard OFED one? Anyway, the next step is to actually leverage that InfiniBand performance. Here are the choices: (1) use traditional TCP/IP on top of the InfiniBand stack, or (2) develop your own integration between your middleware and the lower-level (but faster) InfiniBand protocols.

    While more bandwidth is always a good thing, it's actually the low latency that enables superior performance for your applications on any networking infrastructure: the lower the latency, the faster the response travels through the network and the more transactions you can close per second. The reason why InfiniBand is such a low-latency technology is that it gets rid of most, if not all, of your traditional networking protocol stack: data is literally beamed from one region of RAM in one server into another region of RAM in another server, with no kernel/driver/UDP/TCP or other networking stack overhead involved! Which makes option 1 a no-go: adding TCP/IP on top of InfiniBand is like adding training wheels to your racing bike. It may be OK in the beginning and for development, but it's not quite the performance IB was meant to deliver. Which only leaves option 2: integrating your middleware with fast, low-level InfiniBand protocols.
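
    As an aside, not from the article: readers who want to poke at these low-level pieces on an ordinary OFED system can start with the standard tooling (a hedged sketch; package names and paths vary by distribution, and nothing here is Exalogic-specific):

      # Check that the InfiniBand HCA and its links are up (infiniband-diags package):
      ibstat
      # The classic trick for steering an unmodified sockets application onto SDP
      # is preloading the OFED libsdp shim (assumes libsdp is installed):
      LD_PRELOAD=libsdp.so java -jar myapp.jar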
    And this is what Exalogic's "Exabus" technology is all about. Here are a few Exabus features that help applications leverage the performance of InfiniBand in Exalogic:

    RDMA and SDP integration at the JDBC driver level (SDP), for Oracle WebLogic (SDP), Oracle Coherence (RDMA), Oracle Tuxedo (RDMA) and the new Oracle Traffic Director (RDMA) on Exalogic. Using these protocols, middleware components can communicate a lot faster with each other and with the Oracle database than by using standard networking protocols.

    Seamless integration of Ethernet over InfiniBand from Exalogic's gateway switches into the OS.

    Oracle WebLogic optimizations for handling massive amounts of parallel transactions. Because if you have an 8-lane Autobahn, you also need to improve your ramps so you can feed it with many cars in parallel.

    Integration of WebLogic with Oracle Exadata for faster performance, optimized session management and failover.

    As you see, "Exabus" is Oracle's word for describing all the InfiniBand enhancements Oracle put into Exalogic: OFED stack enhancements, protocols for faster IB access, and InfiniBand support and optimizations at the virtualization and middleware levels. All working together to deliver the full potential of InfiniBand performance. Who else has 100% control over their middleware so they can develop their own low-level protocol integration with InfiniBand? Even if you take an open source approach, you're looking at years of development work to create, test and support a whole new networking technology in your middleware!

    The Extras: Less Hassle, More Productivity, Faster Time to Market. And then there are the other advantages of Engineered Systems that are true for Exalogic the same as they are for every other Engineered System:

    One simple purchasing process: no headaches due to endless RFPs and no "Will X work with Y?" uncertainties.

    Everything has been engineered together: all kinds of bugs and problems have already been fixed at the design level that would otherwise have manifested themselves only after you had built the system from scratch.

    Everything is built, tested and integrated at the factory level. Less integration pain for you, faster time to market.

    Every Exalogic machine world-wide is identical to Oracle's own machines in the lab: instant replication of any problems you may encounter, faster time to resolution.

    Simplified patching, management and operations.

    One throat to choke: imagine finger-pointing hell for systems that have been put together using several different vendors. Oracle's Engineered Systems have a single phone number that customers can call to get their problems solved. For more business-centric values, read The Business Value of Engineered Systems.

    Conclusion: Buy Exalogic, or Get Ready for a 6-12 Month Science Project. And here's the reason why it's not easy to "build your own Exalogic": there's a lot of work required to make such a system fly. In fact, anybody who is starting to "just put together a bunch of servers and an InfiniBand network" is really looking at a 6-12 month science project. And the outcome is likely to not be very enterprise-class. And it won't have Exalogic's performance either. Because building an Engineered System is literally rocket science: it takes a lot of time, effort, resources and many iterations of design/test/analyze/fix to build such a system. That's why InfiniBand has been reserved for HPC scientists for such a long time.
    And only Oracle can bring the power of InfiniBand in an enterprise-class, ready-to-use, pre-integrated version to customers, without the develop/integrate/support pain. For more details, check the new Exalogic overview white paper, which was updated only recently. P.S.: Thanks to my colleagues Ola, Paul, Don and Andy for helping me put together this article!

    Read the article
