Search Results

Search found 1424 results on 57 pages for 'protect'.


  • Oracle Delivers Latest Release of Oracle Enterprise Manager 12c

    - by Scott McNeil
    Richer Service Catalog for Database and Middleware as a Service; Enhanced Database and Middleware Management Help Drive Enterprise-Scale Private Cloud Adoption

    News Summary
    IT organizations are adopting private clouds as a stepping-stone to business-driven, self-service IT. Successful implementations hinge on the ability to efficiently deploy and manage cloud services at enterprise scale. Having a complete cloud management solution integrated with an enterprise-class technology stack is a fundamental requirement for IT. Oracle Enterprise Manager 12c Release 4 meets that requirement by helping businesses become more agile and responsive, while reducing cost, complexity, and risk.

    News Facts
    Oracle Enterprise Manager 12c Release 4, available today, lets organizations rapidly adopt Oracle-based, enterprise-scale private clouds. New capabilities provide advanced technology stack management, secure database administration, and enterprise service governance, enabling Oracle customers and partners to maximize database and application performance and drive innovation using self-service IT platforms. The enhancements have been driven by customers and the growing Oracle Enterprise Manager Ecosystem, comprised of more than 750 Oracle PartnerNetwork (OPN) Specialized partners. Oracle and its partners and customers have built over 140 plug-ins and connectors for Oracle Enterprise Manager. Watch the video highlights.

    Automation for Broader Cloud Services
    Oracle Enterprise Manager 12c Release 4 allows for rapid enterprise-wide adoption of database, middleware, and infrastructure services in the private cloud, driven by an enhanced API-enabled service catalog. The release features “push button” style provisioning of complete environments such as SOA and Oracle Active Data Guard, and fast data cloning that enables rapid deployment and testing of enterprise applications. Out-of-the-box capabilities to detect data and configuration vulnerabilities provide enhanced cloud service governance, along with greater operational control through a flexible and extensible showback mechanism.

    Enhanced Database Management
    A new performance warehouse enables predictive database diagnostics and trend analysis and helps identify database problems before they occur. New enterprise data-governance capabilities enhance security by helping systematically discover and protect sensitive data. Step-by-step orchestration of upgrades with the ability to roll back changes enables faster adoption of Oracle Database 12c.

    Expanded Fusion Middleware Management
    A new consolidated view of Oracle Fusion Middleware 12c deployments with a guided management capability lets administrators apply best management practices to diverse middleware environments and identify performance issues quickly. A Java VM Diagnostics as a Service feature allows governed access to diagnostics data for IT workers across multiple disciplines for accelerated DevOps resolution of defects and performance optimization. New automated provisioning for SOA lets middleware administrators perform mass SOA provisioning with ease.

    Superior Enterprise-Grade Management
    Private roles and preferred credentials have been added to Oracle Enterprise Manager to provide additional fine-grained security for organizations with complex access control requirements. A new security console provides a single point of control for managing the security of Oracle Enterprise Manager environments. Support for the latest industry-standard SNMP v3 protocol, including encryption, enables more secure heterogeneous management. “Smart monitoring” adapts to observed environmental changes and adds self-management capabilities to help Oracle Enterprise Manager run at peak performance while demanding less IT supervision.

    Supporting Quotes
    “Lawrence Livermore National Laboratory has a strong tradition of technology breakthroughs and leadership. As a member of Oracle’s Customer Advisory Board for Oracle Enterprise Manager, we have consistently provided feedback and guidance in the areas of enterprise-scale cloud, self-diagnosability, and secure administration for the product,” said Tim Frazier, CIO, NIF and Photon Sciences, Lawrence Livermore National Laboratory. “We intend to take advantage of the Release 4 features that support enterprise-scale availability and fine-grained security capabilities for private cloud deployments.”

    “IDC's most recent CloudTrack survey shows that most enterprises plan to adopt hybrid cloud architectures over the next three years,” said Mary Johnston Turner, Research Vice President, Enterprise System Management Software, IDC. “These organizations plan to deploy a wide range of workloads into cloud environments, including mission-critical database and middleware services that require high levels of fault tolerance and disaster recovery. Such capabilities were traditionally custom configured for each application, but cloud offers the possibility to incorporate such properties within the service definition, enabling organizations to adopt cloud without compromise. With the latest release of Oracle Enterprise Manager 12c, Oracle is providing customers with an out-of-the-box experience for delivering highly resilient cloud services for databases and applications.”

    “Since its inception, Oracle has been leading the way in innovative, scalable and high performance solutions for the enterprise. With this release of Oracle Enterprise Manager, we are extending this leadership by providing enterprise-scale capabilities for planning, delivering, and managing private clouds. We call this ‘zero-to-cloud – accelerated.’ These enhancements help our customers expedite their adoption of cloud computing and prepare them for the next generation of self-service IT,” said Prakash Ramamurthy, senior vice president of Systems and Cloud Management at Oracle.

    Supporting Resources
    · Oracle Enterprise Manager 12c
    · Video: Cerner Delivers High Performance Private Cloud
    · Video: BIAS Achieves Outstanding Results with Private Cloud
    · Press Release
    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter
    Download the Oracle Enterprise Manager 12c Mobile app

    Read the article

  • ct.sym steals the ASM class

    - by Geertjan
    Some mild consternation on the Twittersphere yesterday: Marcus Lagergren was not able to find the ASM classes in JDK 8 in NetBeans IDE, while there's no such problem in Eclipse (and apparently in IntelliJ IDEA). Help, does NetBeans (despite being incredibly awesome) suck, after all?

    The truth of the matter is that there's something called "ct.sym" in the JDK. When javac is compiling code, it doesn't link against rt.jar. Instead, it uses a special symbol file, lib/ct.sym, with class stubs. Internal JDK classes are not put in that symbol file, since those are internal classes; you shouldn't want to use them at all. However, what if you're Marcus Lagergren, who DOES need these classes? I.e., he's working on the internal JDK classes and hence needs to have access to them. Fair enough that the general Java population can't access those classes, since they're internal implementation classes that could be changed at any time, and one wouldn't want all unknown clients of those classes to start breaking once changes are made to the implementation. In other words, this is rt.jar's internal-class protection mechanism. But, again, we're now Marcus Lagergren and not the general Java population.

    For the solution, read Jan Lahoda, NetBeans Java Editor guru, here: https://netbeans.org/bugzilla/show_bug.cgi?id=186120 In particular, take note of this:

    "AFAIK, the ct.sym is new in JDK6. It contains stubs for all classes that existed in JDK5 (for compatibility with existing programs that would use private JDK classes), but does not contain implementation classes that were introduced in JDK6 (only API classes). This is to prevent application developers to accidentally use JDK's private classes (as such applications would be unportable and may not run on future versions of JDK). Note that this is not really a NB thing - this is the behavior of javac from the JDK. I do not know about any way to disable this except deleting ct.sym or the option mentioned above. Regarding loading the classes: JVM uses two classpath's: classpath and bootclasspath. rt.jar is on the bootclasspath and has precedence over anything on the "custom" classpath, which is used by the application. The usual way to override classes on bootclasspath is to start the JVM with "-Xbootclasspath/p:" option, which prepends the given jars (and presumably also directories) to bootclasspath."

    Hence, let's take the first option, the simpler one, and simply delete the "ct.sym" file. Again, only because we need to work with those internal classes as developers of the JDK, not because we want to hack our way around "ct.sym", which would mean you'd not have portable code at the end of the day. Go to the JDK 8 lib folder and you'll find the file. Delete it. Start NetBeans IDE again, either on JDK 7 or JDK 8 (it doesn't make a difference for these purposes), create a new Java application (or use an existing one), make sure you have set the JDK above as the JDK of the application, and hey presto: the classes are found. The above obviously assumes you have a build of JDK 8 that actually includes the ASM package. Not only are the classes found, but my build succeeded, even though I'm using internal JDK classes. The yellow markings in the sidebar mean that the classes are imported but not used in the code; normally, if I hadn't removed "ct.sym", I would have seen red error markings instead, and the code wouldn't have compiled. Note: I've tried setting "-XDignore.symbol.file" in "netbeans.conf" and in other places, but so far haven't got that to work.

    Simply deleting the "ct.sym" file (or backing it up somewhere and putting it back when needed) is quite clearly the most straightforward solution. Ultimately, if you want to be able to use those internal classes while still having portable code, do you know what you need to do? You need to create a JDK bug report stating that you need an internal class to be added to "ct.sym". You'll probably get an explanation back stating WHY that internal class isn't supposed to be used externally. There must be a reason why those classes aren't available for external usage, otherwise they would have been added to "ct.sym".

    So, now the only remaining question is why the Eclipse compiler doesn't hide the internal JDK classes. Apparently the Eclipse compiler ignores the "ct.sym" file. In other words, at the end of the day, far from being a bug in NetBeans... we have now found a (pretty enormous, I reckon) bug in Eclipse. The Eclipse compiler does not protect you from using internal JDK classes, and the code that you create in Eclipse may not work with future releases of the JDK, since the JDK team is simply going to keep changing the classes that are not found in the "ct.sym" file while assuming (correctly, thanks to the presence of the "ct.sym" mechanism) that no code in the world, other than JDK code, is tied to those classes.
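    For concreteness, here is a minimal Java sketch of the kind of code that starts compiling once "ct.sym" is out of the way (or once javac is run with "-XDignore.symbol.file"). This is only an illustration: the package name jdk.internal.org.objectweb.asm is an assumption based on how recent JDK 8 builds bundle ASM, so adjust the imports to whichever internal classes you actually need.

        // Illustrative only: references the JDK-internal ASM copy.
        // Assumption: your JDK 8 build ships ASM under jdk.internal.org.objectweb.asm.
        // Compiles only after ct.sym is deleted (or javac is invoked with -XDignore.symbol.file).
        import jdk.internal.org.objectweb.asm.ClassWriter;
        import jdk.internal.org.objectweb.asm.Opcodes;

        public class InternalAsmProbe {
            public static void main(String[] args) {
                // Generate a trivial empty class to prove the internal ASM classes resolve.
                ClassWriter cw = new ClassWriter(ClassWriter.COMPUTE_FRAMES);
                cw.visit(Opcodes.V1_7, Opcodes.ACC_PUBLIC, "GeneratedType", null, "java/lang/Object", null);
                cw.visitEnd();
                System.out.println("Generated " + cw.toByteArray().length + " bytes of bytecode");
            }
        }

    Keep in mind that such code is tied to unsupported internals, which is exactly the portability trade-off described above.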

    Read the article

  • Online Password Security Tactics

    - by BuckWoody
    Recently two more large databases were attacked and compromised, one at the popular Gawker Media sites and the other at McDonald's. Every time this kind of thing happens (which is FAR too often) it should remind the technical professional to secure their systems correctly. If you write software that stores passwords, they should be heavily encrypted and not human-readable in any storage. I advocate a separate store for the login and the password, so that if one is compromised, the other is not. I also advocate that you set a bit flag when a user changes their password, and send out a reminder to change passwords if that bit isn't changed every three or six months.

    But this post is about the *other* side - what to do to secure your own passwords, especially those you use online, either in a cloud service or at a provider. While you're not in control of these breaches, there are some things you can do to help protect yourself. Most of these are obvious, but they contain a few little twists that make the process easier.

    Use Complex Passwords
    This is easily stated, and probably one of the most un-heeded pieces of advice. There are three main concepts here:
    · Don't use a dictionary-based word
    · Use mixed case
    · Use punctuation, special characters and so on
    So this: password
    isn't nearly as safe as this: P@ssw03d
    Of course, this only helps if the site that stores your password encrypts it. Gawker does, so theoretically if you had the second password you're in better shape, at least, than the first. Dictionary words are quickly broken, regardless of the encryption, so the more unusual characters you use, and the farther away from dictionary words you get, the better. Of course, this doesn't help, not even a little, if the site stores the passwords in clear text, or the key to their encryption is broken. In that case...

    Use a Different Password at Every Site
    What? I have hundreds of sites! Are you kidding me? Nope - I'm not. If you use the same password at every site, when a site gets attacked, the attacker will store your name and password value for attacks at other sites. So the only safe thing to do is to use different names or passwords (or both) at each site. Of course, most sites use your e-mail as a username, so you're kind of hosed there. So even though you have hundreds of sites you visit, you need to have at least a different password at each site. But it's easier than you think - if you use an algorithm. What I'm describing is to pick a "root" password, and then modify that based on the site or purpose. That way, if a site is compromised, you can still use that root password for the other sites. Let's take that second password: P@ssw03d. Now you can append, prepend or intersperse that password with other characters to make it unique to the site. That way you can easily remember the root password, but make it unique to the site. For instance, perhaps you read a lot of information on Gawker - how about these:
    · P@ssw03dRead
    · ReadP@ssw03d
    · PR@esasdw03d
    If you have lots of sites, tracking even this can be difficult, so I recommend you use password software such as Password Safe or some other tool to keep a secure database of your passwords for each site. DO NOT store this on the web. DO NOT use an Office document (Microsoft or otherwise) that is "encrypted" - the encryption that office automation packages use is very trivial, and easily broken. A quick web search for tools to do that should show you how bad a choice this is.

    Change Your Password on a Schedule
    I know. It's a real pain. And it doesn't seem worth it... until your account gets hacked. A quick note here - whenever a site gets hacked (and I find out about it) I change the password at that site immediately (or quit doing business with them) and then change the root password on every site, as quickly as I can. If you follow the tip above, it's not as hard. Just add another number, year, month, day, something like that into the mix. It's not unlike making a Primary Key in an RDBMS.
    P@ssw03dRead10242010
    Change the site password, and then update your password database. I do this about once a month, on the first or last day, during staff meetings. :) A short illustration of the derivation scheme follows at the end of this post.

    If you have other tips, post them here. We can all learn from each other on this.
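    To make the "root password plus site token plus date" scheme above concrete, here is a minimal Java sketch. The root value, the site token, and the date are the article's own illustrative values, not a recommendation; the only point is that the derivation is deterministic, so you can rebuild any site's password from the pieces you remember.

        import java.time.LocalDate;
        import java.time.format.DateTimeFormatter;

        // Minimal sketch of the root-password derivation described above.
        // All values are illustrative; a real root password should never live in source code.
        public class SitePasswordBuilder {

            // Append a per-site token and a date component (for scheduled rotation) to the root.
            static String derive(String root, String siteToken, LocalDate rotationDate) {
                String datePart = rotationDate.format(DateTimeFormatter.ofPattern("MMddyyyy"));
                return root + siteToken + datePart; // prepending or interspersing works just as well
            }

            public static void main(String[] args) {
                // "P@ssw03d" and "Read" are the article's own example values.
                System.out.println(derive("P@ssw03d", "Read", LocalDate.of(2010, 10, 24)));
                // prints P@ssw03dRead10242010
            }
        }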

    Read the article

  • Amanda Todd – What Parents Can Learn From Her Story

    - by D'Arcy Lussier
    Amanda Todd was a bullied teenager who committed suicide this week. Her story has become headline news due in part to the YouTube video she posted telling her story. The story is heartbreaking for so many reasons, but I wanted to talk about what we as parents can learn from this. Being the dad to two girls, one that's 10, I'm very aware of the dangers that the internet holds. When I saw her story, one thing jumped out at me - unmonitored internet access at an early age. My daughter (then 9) came home from a friend's place once and asked if she could be in a YouTube video with her friend. Apparently this friend was allowed to do whatever she wanted on the internet, including posting goofy videos. This set off warning bells and we ensured our daughter realized the dangers and that she was not to ever post videos of herself. In looking at Amanda's story, the access to unmonitored internet time, along with just being a young girl and being flattered by an online predator, were the key events that ultimately led to her suicide. Yes, the reaction of her classmates and "friends" was horrible as well; I'm not diluting that. But our youth don't fully understand yet that what they do on the internet today will follow them potentially forever. And the people they meet online aren't necessarily who they claim to be. So what can we as parents learn from Amanda's story?

    Parents Shouldn't Feel Bad About Being Internet Police
    Our job as parents is in part to protect our kids and keep them safe, even if they don't like our measures. This includes monitoring, supervising, and restricting their internet activities. In our house we have a family computer in the living room where the kids can watch videos and surf the web. It's in plain view of everyone, so you can't hide what you're looking at. If our daughter goes to a friend's place, we ask about what they did and what they played. If the computer comes up, we ask about what they did on it. Luckily our daughter is very up front and honest in telling us things, so we have very open discussions.

    Parents Need to Be Honest About the Dangers of the Internet
    I'm sure every generation says that "kids grow up so fast these days", but in our case the internet really does push our kids to be exposed to things they otherwise wouldn't experience. One wrong word in a Google search, a click of a link in a spam email, or just general curiosity can expose a child to things they aren't ready for or should never be exposed to (and I'm not just talking about adult material - have you seen some of the graphic pictures from war zones posted on news sites recently?). Our stance as parents has been to be open about discussing the dangers with our kids before they encounter any content - be proactive instead of reactionary. Part of this is alerting them to the monsters that lurk on the internet as well. As kids explore the world wide web, they're eventually going to encounter some chat room or some Facebook friend invite or other personal connection with someone. More than ever, kids need to be educated on the dangers of engaging with people online and sharing personal information. You can think of it as an evolved version of the discussion our parents had with us about using the phone: "Don't say 'I'm home alone', don't say when mom or dad get home, don't tell them any information, etc."

    Parents Need to Talk Self-Worth at Home
    Katie makes the point better than I ever could (one bad word towards the end): our children need to understand their value beyond what the latest issue of TigerBeat says, or the media that continue flaunting physical attributes over intelligence and character, or a society that puts focus on status and wealth. They also have to realize that just because someone pays you a compliment, that doesn't mean you should ignore personal boundaries and limits. What does this have to do with the internet? Well, in days past if you wanted to be social you had to go out somewhere. Now you can video chat with any number of people from the comfort of wherever your laptop happens to be - and not just text, but full HD video with sound! While innocent children head online in the hopes of meeting cool people, predators with bad intentions are heading online too. As much as we try to monitor their online activity and be honest about the dangers of the internet, the human side of our kids isn't something we can control. But we can try to influence them to see themselves as not needing to search out the acceptance of complete strangers online. Way easier said than done, but ensuring self-worth is something discussed, encouraged, and celebrated is a step in the right direction.

    Parental Wake-Up Call
    This post is not a critique of Amanda's parents. The reality is that cyber bullying/abuse is happening every day, and there are millions of parents that have no clue it's happening to their children. Amanda's story is a wake-up call that our children's online activities may be putting them in danger. My heart goes out to the parents of this girl. As a father of daughters, I can't imagine what I would do if I found my daughter having to hide in a ditch to avoid a mob, or had to call 911 to report my daughter had attempted suicide by drinking bleach, or had to deal with a child turning to drugs/alcohol/cutting to cope. It would be horrendous if we as parents didn't re-evaluate our family internet policies in light of this event. And in the end, Amanda's video was meant to bring attention to her plight and encourage others going through the same thing. We may not be kids, but we can still honour her memory by helping safeguard our children.

    Read the article

  • What's New In 11.1.2.1 (Talleyrand SP1)

    - by russ.bishop
    This release is primarily about bug fixes and that's what we spent the most time on, but we also addressed a number of other things:

    1. Performance improvements
    We've done a lot of work to improve the performance of page load and execution times. For example, the View Compare page is about half the size it was previously! We've also done a lot of work on the server to improve performance of queries, exports, action scripts, etc. We implemented some finer-grained locking so fewer operations will block other users while they are in progress. We made some optimizations to improve performance when you have a lot of network or database latency as well. Just a few examples: an Import that previously took 8 GB of memory and hours to complete now runs in about 30 minutes and never takes more than 1 GB of RAM. Searching by exact Node Name now completes within 2 seconds even for a hierarchy with millions of nodes. Another search that was taking 30 seconds to run now completes in less than 5 seconds.

    2. Upgrade support
    This release supports automatic upgrade from previous releases, built right into the console.

    3. Console Improvements
    The Console has been reorganized and made easier to use. It is also much more multi-threaded, so it responds quicker without freezing up when you save changes or when it needs to get status.

    4. Property Namespaces
    Properties now have a concept called a Namespace. This is tied into the Application Templates to prevent conflicts with duplicate property names. Right now, if you have an AccountType and you pull in the HFM template, it also has AccountType, so you end up creating properties with decorations on the name like "Account Type (HFM)". This is no longer necessary. In addition, properties within a namespace must have unique labels, but they can be duplicated across namespaces. So in the Property Grid when you click on the HFM category, you just see "AccountType". When you click on MyCategory, you see "AccountType", but they are different properties with different values. Within formulas, the names are still unique (e.g. Custom.AccountType vs HFM.AccountType). I'll write more about this one later.

    5. Single Sign On
    DRM now supports Single Sign-On via HSS. For example, if you are using Oracle's OAM as your SSO solution, then you configure HSS to use OAM just like you would before. You also configure DRM to use HSS, again just like before. Then you configure OAM to protect the DRM web app, like you would any other website. However, once you do those things, users are no longer prompted to enter their username/password. They simply get redirected to OAM if they don't already have a login token; otherwise they pick their application and sail right into DRM. You can also avoid having to pick an application (see the next item).

    6. URL-based navigation
    You can now specify the application you want to log into via the URL. Combined with SSO and your intranet, it becomes easy to provide links on your intranet portal that take users directly into a specific DRM application. We also support specifying the Version, Hierarchy, and Node. Again, this can be used on your internal portal, but the scenarios get even more interesting when you are using workflow like Oracle BPEL: you can automatically generate links within emails that will take users directly to a specific node in the UI.

    7. Job status and cancellation
    A lot of the jobs now report their status and support true cancellation. Action Scripts also report a progress-complete percentage, since the amount of work is known ahead of time.

    8. Action Script Options
    Action scripts support Option declarations at the top of the file so a script can self-describe (when specified in the file, the corresponding item in the file is ignored). For example:
    Option|DetectDelimiter
    Option|UsePropertyNames|true
    This will tell DRM to automatically detect the delimiter (a pipe symbol in this case) and that all references to properties are by Name, not by Label. Note that when you load a script in the UI, if you use Labels we automatically try to match them up if they are unique. Any duplicates are indicated and you are presented with a choice to pick which property you actually referred to. This is somewhat similar to Version substitution, but tailored for properties. A small illustrative sketch of this header handling appears at the end of this post.

    There are other more minor changes and, like I said earlier, a lot of bug fixes and performance improvements. Hopefully I will get a chance to dig into some of these things in future blog posts.
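    To picture how the Option header works, here is a small, purely hypothetical Java sketch of a loader that honors Option|DetectDelimiter and Option|UsePropertyNames before handing the remaining lines to normal action processing. It is not DRM code; the class and method names are invented for illustration only.

        import java.util.ArrayList;
        import java.util.List;

        // Hypothetical sketch of reading the self-describing Option header of an action script.
        // Not the DRM implementation; it only illustrates the "Option|Name[|Value]" idea described above.
        public class ActionScriptHeader {
            char delimiter = '|';            // default; DetectDelimiter infers it from the Option lines themselves
            boolean usePropertyNames = false;

            List<String> parse(List<String> lines) {
                List<String> actionLines = new ArrayList<>();
                for (String line : lines) {
                    if (line.startsWith("Option|")) {
                        String[] parts = line.split("\\|");
                        if (parts[1].equals("DetectDelimiter")) {
                            delimiter = '|';                        // here: a pipe, as in the example above
                        } else if (parts[1].equals("UsePropertyNames")) {
                            usePropertyNames = Boolean.parseBoolean(parts[2]);
                        }
                    } else {
                        actionLines.add(line);                      // everything else is a regular action line
                    }
                }
                return actionLines;
            }
        }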

    Read the article

  • iPack - The iOS Application Packager

    - by user13277780
    iOS applications are distributed in .ipa archive files. These files are regular zip files which contain application resources and executables. To protect them from unauthorized modifications and to provide identification of their sources, the content of the archives is signed. The signature is included in the application executable of an .ipa archive and protects the executable file itself and the associated resource files. Apple provides native Mac OS tools for signing iOS executables (which are actually generic Mach-O code signing tools), but these tools are not generally available on other platforms. To provide a multi-platform development environment for JavaFX-based iOS applications, we ported iOS signing and packaging to Java and created a dedicated ipack tool for it. The iPack tool can be used as the last step of creating an .ipa package on various operating systems. The prototype has been tested by creating a final distributable for a JavaFX application that runs on iPad, all done on Windows 7.

    Source Code
    The source code of the iPack tool is in the OpenJFX project repository. You can find it in: <openjfx root>/rt/tools/ios/Maven/ipack
    To build the iPack tool use:
        rt/tools/ios/Maven/ipack$ mvn package
    After building, you can run the tool:
        java -jar <path to ipack.jar> <arguments>

    Signing keystore
    The tool uses a Java key store to read the signing certificate and the associated private key. To prepare such a keystore, users can use keytool from the JDK. One possible scenario is to import an existing private key and certificate from a key store used on Mac OS.
    To list the content of an existing key store and identify the source alias:
        keytool -list -keystore <src keystore>.p12 -storetype pkcs12 -storepass <src keystore password>
    To create a Java key store and import the private key with its certificate into the key store:
        keytool -importkeystore \
            -destkeystore <dst keystore> -deststorepass <dst keystore password> \
            -srckeystore <src keystore>.p12 -srcstorepass <src keystore password> -srcstoretype pkcs12 \
            -srcalias <src alias> -destalias <dst alias> -destkeypass <dst key password>
    Another scenario would be to generate a private/public key pair directly in a Java key store and create a certificate request from it. After sending the request to Apple, one can then import the certificate response back into the Java key store and complete the signing certificate entry. In both scenarios the resulting alias in the Java key store will contain only a single (leaf) certificate. This can be verified with the following command:
        keytool -list -v -keystore <ipack keystore> -storepass <keystore password>
    When looking at the Certificate chain length entry, the number next to it is 1. When an executable file is signed on Mac OS, the resulting signature (in CMS format) includes the whole certificate chain up to the Apple Root CA. The ipack tool includes only the chain which is stored under the alias specified on the command line. So to have the whole chain in the signature we need to replace the single certificate entry under the alias with the corresponding full certificate chain. To do that we first need to create the chain in a separate file. It is easy to create such a chain when working with certificates in Base-64 encoded PEM format. A certificate chain can be created by concatenating the PEM certificates, which should form the chain, into a single file. For iOS signing we need the following certificates in our chain:
    · Apple Root CA
    · Apple Worldwide Developer Relations CA
    · Our signing leaf certificate
    To convert a certificate from the binary DER format (.der, .cer) to PEM format:
        keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert -file <certificate>.cer
        keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert -rfc -file <certificate>.pem
    To export the signing certificate into PEM format:
        keytool -exportcert -keystore <ipack keystore> -storepass <keystore password> -alias <signing alias> -rfc -file SigningCert.pem
    After constructing a chain from AppleIncRootCertificate.pem, AppleWWDRCA.pem and SigningCert.pem, it can be imported back into the keystore with:
        keytool -importcert -noprompt -keystore <ipack keystore> -storepass <keystore password> -alias <signing alias> -keypass <key password> -file SigningCertChain.pem
    To summarize, the following example shows the full certificate chain replacement process (a small Java sketch for the same chain-length check appears at the end of this post):
        keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert1 -file AppleIncRootCertificate.cer
        keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert1 -rfc -file AppleIncRootCertificate.pem
        keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert2 -file AppleWWDRCA.cer
        keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert2 -rfc -file AppleWWDRCA.pem
        keytool -exportcert -keystore ipack.ks -storepass keystorepwd -alias mycert -rfc -file SigningCert.pem
        cat SigningCert.pem AppleWWDRCA.pem AppleIncRootCertificate.pem > SigningCertChain.pem
        keytool -importcert -noprompt -keystore ipack.ks -storepass keystorepwd -alias mycert -keypass keypwd -file SigningCertChain.pem
        keytool -list -v -keystore ipack.ks -storepass keystorepwd

    Usage
    When the ipack tool is started with no arguments it prints the following usage information:
        Usage: ipack <archive> <signing opts> <application opts> [ <application opts> ... ]
        Signing options:
            -keystore <keystore>    keystore to use for signing
            -storepass <password>   keystore password
            -alias <alias>          alias for the signing certificate chain and the associated private key
            -keypass <password>     password for the private key
        Application options:
            -basedir <directory>    base directory from which to derive relative paths
            -appdir <directory>     directory with the application executable and resources
            -appname <file>         name of the application executable
            -appid <id>             application identifier
        Example:
        ipack MyApplication.ipa -keystore ipack.ks -storepass keystorepwd -alias mycert -keypass keypwd -basedir mysources/MyApplication/dist -appdir Payload/MyApplication.app -appname MyApplication -appid com.myorg.MyApplication
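    As a programmatic complement to the final keytool -list -v check above, a few lines of Java can confirm that the alias now carries the full three-certificate chain. This is just a convenience sketch; "ipack.ks", "keystorepwd" and "mycert" are the same placeholder values used in the example commands above.

        import java.io.FileInputStream;
        import java.security.KeyStore;
        import java.security.cert.Certificate;
        import java.security.cert.X509Certificate;

        // Small sketch: verify that the signing alias holds the whole chain
        // (signing leaf + Apple WWDR CA + Apple Root CA) after the replacement above.
        public class CheckChain {
            public static void main(String[] args) throws Exception {
                KeyStore ks = KeyStore.getInstance("JKS");
                try (FileInputStream in = new FileInputStream("ipack.ks")) {
                    ks.load(in, "keystorepwd".toCharArray());
                }
                Certificate[] chain = ks.getCertificateChain("mycert");
                System.out.println("Certificate chain length: " + (chain == null ? 0 : chain.length));
                if (chain != null) {
                    for (Certificate c : chain) {
                        System.out.println("  " + ((X509Certificate) c).getSubjectX500Principal());
                    }
                }
            }
        }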

    Read the article

  • Death March

    - by Nick Harrison
    It is a horrible sight to watch a project fail. There are few things as bad. Watching a project fail, regardless of the reason, is almost like sitting in a room with a "Dementor" from Harry Potter. It will literally suck all of the life and joy out of the room. Nearly every project that I have seen fail has failed because of political or management challenges. Sometimes there are technical challenges that bring a project to its knees, but usually projects fail for less technical reasons. Here are a few observations about projects failing for political reasons.

    Both the client and the consultants have to be committed to seeing the project succeed. Put simply, you cannot solve a problem when the primary stakeholders do not truly want it solved. This could come from a consultant being more interested in extending the engagement. It could come from a client being afraid of what will happen to them once the problem is solved. It could come from disenfranchised stakeholders. Sometimes a project is beset on all sides. When you find yourself working on a project that has this kind of threat, do all that you can to constrain the disruptive influences of the bad apples. If their influence cannot be constrained, you truly have no choice but to move on to a new project.

    Tough choices have to be made to make a project successful. These choices will affect everyone involved in the project. These choices may involve users not getting a change request through that they want. Developers may not get to use the tools that they want. Everyone may have to put in more hours than they originally planned. Steps may be skipped. Compromises will be made, but if everyone stays committed to the end goal, you can still be successful. If individuals start feeling disgruntled or resentful of the compromises reached, the project can easily be derailed. When everyone is not working towards a common goal, it is like driving with one foot on the brake and one foot on the accelerator. Not only will you not get to where you are planning, you will also damage the car and possibly the passengers as well.

    It is important to always keep the end result in mind. Regardless of the development methodology being followed, the end goal is not comprehensive documentation. In all cases, it is working software. Comprehensive documentation is nice but useless if the software doesn't work. You can never get so distracted by the next goal that you fail to meet the current goal. Most projects are ultimately marathons. This means that the pace must be sustainable. Regardless of the temptations, you cannot burn the team alive. Processes will fail. Technology will get outdated. Requirements will change, but your people will adapt and learn and grow. If everyone on the team, from the most senior analyst to the most junior recruit, trusts and respects each other, there is no challenge that they cannot overcome. When everyone involved faces challenges with the attitude "This is my project and I will not let it fail" and "You are my teammate and I will not let you fail", you will in fact not fail. When you find a team that embraces this attitude, protect it at all costs.

    Edward Yourdon wrote a book called Death March. In it, he included a graph for categorizing Death March project types based on the Happiness of the Team and the Chances of Success. Chances are we have all worked on Death March projects. We will all most likely work on more Death March projects in the future. To a certain extent, they seem to be inevitable, but they should never be "suicide" or "ugly". Ideally, they can all be "Mission Impossible", where everyone works hard, has fun, and knows that there is a good chance that they will succeed. If you are ever lucky enough to work on such a project, you will know that sense of pride that comes from the eventual success. You will recognize a profound bond with the team that you worked with. Chances are it will change your life, or at least your outlook on life. If you have not already read this book, get a copy and study it closely. It will help you survive and make the most out of your next Death March project.

    Read the article

  • How to Achieve Real-Time Data Protection and Availability... For Real

    - by JoeMeeks
    There is a class of business and mission critical applications where downtime or data loss have a substantial negative impact on revenue, customer service, reputation, cost, etc. Because the Oracle Database is used extensively to provide reliable performance and availability for this class of application, it also provides an integrated set of capabilities for real-time data protection and availability. Active Data Guard, depicted in the figure below, is the cornerstone for accomplishing these objectives because it provides the absolute best real-time data protection and availability for the Oracle Database. This is a bold statement, but it is supported by the facts. It isn't so much that alternative solutions are bad, it's just that their architectures prevent them from achieving the same levels of data protection, availability, simplicity, and asset utilization provided by Active Data Guard. Let's explore further.

    Backups are the most popular method used to protect data and are an essential best practice for every database. Not surprisingly, Oracle Recovery Manager (RMAN) is one of the most commonly used features of the Oracle Database. But comparing Active Data Guard to backups is like comparing apples to motorcycles. Active Data Guard uses a hot (open read-only), synchronized copy of the production database to provide real-time data protection and HA. In contrast, a restore from backup takes time and often has many moving parts - people, processes, software and systems - that can create a level of uncertainty during an outage that critical applications can't afford. This is why backups play a secondary role for your most critical databases, complementing real-time solutions that can provide both data protection and availability.

    Before Data Guard, enterprises used storage remote-mirroring for real-time data protection and availability. Remote-mirroring is a sophisticated storage technology promoted as a generic infrastructure solution that makes a simple promise - whatever is written to a primary volume will also be written to the mirrored volume at a remote site. Keeping this promise is also what causes data loss and downtime when the data written to primary volumes is corrupt - the same corruption is faithfully mirrored to the remote volume, making both copies unusable. This happens because remote-mirroring is a generic process. It has no intrinsic knowledge of Oracle data structures to enable advanced protection, nor can it perform independent Oracle validation BEFORE changes are applied to the remote copy. There is also nothing to prevent human error (e.g. a storage admin accidentally deleting critical files) from also impacting the remote mirrored copy. Remote-mirroring tricks users by creating a false impression that there are two separate copies of the Oracle Database. In truth, while remote-mirroring maintains two copies of the data on different volumes, both are part of a single closely coupled system. Not only will remote-mirroring propagate corruptions and administrative errors, but the changes applied to the mirrored volume are a result of the same Oracle code path that applied the change to the source volume. There is no isolation, either from a storage mirroring perspective or from an Oracle software perspective. Bottom line, storage remote-mirroring lacks both the smarts and the isolation level necessary to provide true data protection.

    Active Data Guard offers much more than storage remote-mirroring when your objective is protecting your enterprise from downtime and data loss. Like remote-mirroring, an Active Data Guard replica is an exact block-for-block copy of the primary. Unlike remote-mirroring, an Active Data Guard replica is NOT a tightly coupled copy of the source volumes - it is a completely independent Oracle Database. Active Data Guard's inherent knowledge of Oracle data block and redo structures enables a separate Oracle Database, using a different Oracle code path than the primary, to use the full complement of Oracle data validation methods before changes are applied to the synchronized copy. These include physical checksum, logical intra-block checking, lost write validation, and automatic block repair. The figure below illustrates the stark difference between the knowledge that remote-mirroring can discern from an Oracle data block and what Active Data Guard can discern.

    An Active Data Guard standby also provides a range of additional services enabled by the fact that it is a running Oracle Database - not just a mirrored copy of data files. An Active Data Guard standby database can be open read-only while it is synchronizing with the primary. This enables read-only workloads to be offloaded from the primary system and run on the active standby - boosting performance by utilizing all assets. An Active Data Guard standby can also be used to implement many types of system and database maintenance in rolling fashion. Maintenance and upgrades are first implemented on the standby while production runs unaffected at the primary. After the primary and standby are synchronized and all changes have been validated, the production workload is quickly switched to the standby. The only downtime is the time required for user connections to transfer from one system to the next. These capabilities further expand the expectations of availability offered by a data protection solution beyond what is possible using storage remote-mirroring.

    So don't be fooled by appearances. Storage remote-mirroring and Active Data Guard replication may look similar on the surface - but the devil is in the details. Only Active Data Guard has the smarts, the isolation, and the simplicity to provide the best data protection and availability for the Oracle Database. Stay tuned for future blog posts that dive into the many differences between storage remote-mirroring and Active Data Guard along the dimensions of data protection, data availability, cost, asset utilization and return on investment. For additional information on Active Data Guard, see:
    · Active Data Guard Technical White Paper
    · Active Data Guard vs Storage Remote-Mirroring
    · Active Data Guard Home Page on the Oracle Technology Network

    Read the article

  • Process Centric Banking: Loan Origination Solution

    - by Manish Palaparthy
    There is an old proverb that goes, "The difference between theory and practice is greater in practice than in theory". So, we keep doing numerous "Proofs of Concept" with our own products on various business cases to analyze them deeply, understand them, and explain them to our customers. We then present our learnings as they happened. The awareness of each PoC should help readers increase the trustworthiness of the results coming out of these PoCs. I present one such PoC where we invested a lot of time & effort.

    Process Centric Banking: Loan Origination Solution
    Loan Origination is a process by which a borrower applies for a new loan and the lender processes that application. Loan origination includes the series of steps taken by the bank from the point the customer shows interest in a loan product all the way to disbursal of funds. The Loan Origination process is relevant for many kinds of lenders in Financial services: Banks, Credit Unions, NBFCs (Non Banking Financial Companies) and so on. For simplicity's sake, I will use "Bank" as the lending institution in the rest of my article. Loan Origination is one of the core processes for Banks, as it is the process by which the bank creates the assets from which the institution earns most of its profits. A well tuned loan origination process can affect the Bank in many positive ways. Banks have always shown great interest in automating the loan origination process for the above reason. However, due to the constant changes in customer environment, market dynamics, prevailing economic conditions, cost pressures & regulatory environment, they run into a lot of challenges. Let me categorize some of these challenges for you:

    Customer Environment
    · Multiple Channels: The customer can use any of the available channels (Internet Banking, Email, Fax, Branch, Phone Banking, ATM, Broker, Mobile, Snail Mail) to perform all or some of the activities related to her loan
    · Visibility into the origination process: Expect immediate updates on the status of loan processing & alert messages
    · Reduced Turn Around Time: Expect loans to be processed with the least turn around time
    · Reduced loan processing fees: Partly due to market dynamics, the customer expects the loan processing fee to be negligible

    Market Dynamics
    · Competitive environment: The competition keeps creating many variants of loan products to attract customers; the bank needs to create similar product variants with better offers to attract customers or keep existing ones
    · Ability to migrate loans from one vendor to another: It has become really easy for retail customers to move from one bank to the other, given the low fee of loan processing and highly attractive offers. How does the bank protect its customer base while actively engaging with potential customers banking with competitor banks?
    · Flexibility to react to market developments: Market developments greatly influence loan processing, underwriting, asset valuation, and risk mitigation rules. Can the bank modify rules and policies? The idea is not just to react to market developments but to pro-actively manage new developments

    Economic conditions
    · Constant change in various rates and their implications on the rates and rules applied when on-boarding a loan: How quickly can the bank apply changes to rates offered to customers when the central bank changes various rates?
    · Requirements of audit by the central banker: Tough economic conditions have demanded much more stringent audit rules and tests. The bank needs to produce ready reports (historic & operational) for audit compliance
    · Risk Mitigation: While risk mitigation has always been a key concern for the bank, this is the area where the bank's underwriters & risk analysts spend the maximum time when processing a loan application. In order to reduce TAT the bank cannot compromise on its risk mitigation strategies

    Cost pressures
    · Reduce cost of processing per application: To deliver a reduced loan processing fee to the customer, the bank needs to keep its cost of processing per loan application low
    · Meet customer TAT expectations while reducing the queues and the systems being used to process the loan application: The loan application could potentially be spending a lot of time waiting in the queue for further processing. Different volumes & patterns of applications demand different queuing algorithms. The bank needs to have real-time visibility into these queues and have the flexibility to change queuing algorithms at runtime
    · Increase the use of electronic communication and reduce the branch channel usage: Less automation not only leads to increased turn around time, it also adds to the cost of reaching out to customers

    The objective of our PoC was to implement a Loan Origination Solution whose ownership lies with the bank and that effectively meets the challenges listed above. We built a simple storyboard for the solution. We then went about implementing our storyboard using Oracle BPM Suite and WebCenter Content: Imaging. The web UI has been built on ADF technologies, while the integration with core services has been implemented using the underlying SOA infrastructure. The BPM process model is quite exhaustive and can meet all the challenges listed above to a reasonable degree.

    A bank intending to implement an end-to-end Loan Origination Solution has multiple options at its disposal. It can:
    · Develop a custom Loan Origination Application from scratch: Gives maximum opportunity to build what you want, but is inflexible to upgrade and maintain. Higher TCO in the long term
    · Buy a packaged application & customize it: Customizing a generic loan application can be tedious and prove as difficult as the above
    · Build it using many disparate & un-integrated tools: Initially seems easier than developing from scratch, but without integrated tool sets this is not a viable approach either
    · Or adopt a solution based on a framework: Independent Services and Business Process Modeling provide a decoupled architecture that is flexible
    We built this framework end-to-end, with the core process of loan origination & several sub-processes such as Analyse and define customer needs, Customer credit verification, Identity check processes, Legal review process, New customer registration & Risk assessment.

    Read the article

  • External USB 3 drive not recognized

    - by ilan123
    Ubuntu 12.10 64 bit seems not to recognize my external hard disk. It is a Vantec NST-310S3 external disk enclosure with a WD 3TB drive. The disk has two NTFS partitions. My PC is a dual boot system. Under Windows 7 the hard disk works fine, but I can't make it work with Ubuntu. When the drive is connected to the PC, the command sudo fdisk -l seems to hang forever. Below are the outputs of lsusb and cat /proc/partitions, first without the external drive and then with it connected. I have also added the last lines of the dmesg output at the end.

    First without the drive:

        ilan@linux:~$ lsusb
        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 001 Device 003: ID 13ba:0017 Unknown PS/2 Keyboard+Mouse Adapter
        Bus 001 Device 004: ID 046d:c50e Logitech, Inc. Cordless Mouse Receiver
        Bus 001 Device 005: ID 0ac8:3420 Z-Star Microelectronics Corp. Venus USB2.0 Camera

        ilan@linux:~$ cat /proc/partitions
        major minor #blocks name
        8 0 1953514584 sda
        8 1 102400 sda1
        8 2 629043200 sda2
        8 3 367001600 sda3
        8 4 1 sda4
        8 5 471859200 sda5
        8 6 157286400 sda6
        8 7 324115456 sda7
        8 8 4101120 sda8
        11 0 1048575 sr0

    Second with the USB 3 drive:

        ilan@linux:~$ lsusb
        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 004 Device 002: ID 174c:55aa ASMedia Technology Inc.
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 001 Device 003: ID 13ba:0017 Unknown PS/2 Keyboard+Mouse Adapter
        Bus 001 Device 004: ID 046d:c50e Logitech, Inc. Cordless Mouse Receiver
        Bus 001 Device 005: ID 0ac8:3420 Z-Star Microelectronics Corp. Venus USB2.0 Camera

        ilan@linux:~$ cat /proc/partitions
        major minor #blocks name
        8 0 1953514584 sda
        8 1 102400 sda1
        8 2 629043200 sda2
        8 3 367001600 sda3
        8 4 1 sda4
        8 5 471859200 sda5
        8 6 157286400 sda6
        8 7 324115456 sda7
        8 8 4101120 sda8
        11 0 1048575 sr0
        8 16 2930266584 sdb

        ilan@linux:~$ lsusb -v -s 004:002
        Bus 004 Device 002: ID 174c:55aa ASMedia Technology Inc.
        Couldn't open device, some information will be missing
        Device Descriptor:
          bLength 18
          bDescriptorType 1
          bcdUSB 3.00
          bDeviceClass 0 (Defined at Interface level)
          bDeviceSubClass 0
          bDeviceProtocol 0
          bMaxPacketSize0 9
          idVendor 0x174c ASMedia Technology Inc.
          idProduct 0x55aa
          bcdDevice 1.00
          iManufacturer 2
          iProduct 3
          iSerial 1
          bNumConfigurations 1
          Configuration Descriptor:
            bLength 9
            bDescriptorType 2
            wTotalLength 44
            bNumInterfaces 1
            bConfigurationValue 1
            iConfiguration 0
            bmAttributes 0xc0
              Self Powered
            MaxPower 0mA
            Interface Descriptor:
              bLength 9
              bDescriptorType 4
              bInterfaceNumber 0
              bAlternateSetting 0
              bNumEndpoints 2
              bInterfaceClass 8 Mass Storage
              bInterfaceSubClass 6 SCSI
              bInterfaceProtocol 80 Bulk-Only
              iInterface 0
              Endpoint Descriptor:
                bLength 7
                bDescriptorType 5
                bEndpointAddress 0x81 EP 1 IN
                bmAttributes 2
                  Transfer Type Bulk
                  Synch Type None
                  Usage Type Data
                wMaxPacketSize 0x0400 1x 1024 bytes
                bInterval 0
                bMaxBurst 15
              Endpoint Descriptor:
                bLength 7
                bDescriptorType 5
                bEndpointAddress 0x02 EP 2 OUT
                bmAttributes 2
                  Transfer Type Bulk
                  Synch Type None
                  Usage Type Data
                wMaxPacketSize 0x0400 1x 1024 bytes
                bInterval 0
                bMaxBurst 15

        ilan@linux:~$ sudo fdisk -l
        [sudo] password for ilan:
        Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
        255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xf1b4f1ee
        Device Boot Start End Blocks Id System
        /dev/sda1 * 2048 206847 102400 7 HPFS/NTFS/exFAT
        /dev/sda2 206848 1258293247 629043200 7 HPFS/NTFS/exFAT
        /dev/sda3 1258293248 1992296447 367001600 7 HPFS/NTFS/exFAT
        /dev/sda4 1992298494 3907028991 957365249 f W95 Ext'd (LBA)
        /dev/sda5 1992298496 2936016895 471859200 7 HPFS/NTFS/exFAT
        /dev/sda6 2936018944 3250591743 157286400 7 HPFS/NTFS/exFAT
        /dev/sda7 3250593792 3898824703 324115456 83 Linux
        /dev/sda8 3898826752 3907028991 4101120 82 Linux swap / Solaris

    dmesg output after connecting the external drive:

        [ 23.740567] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
        [ 23.740786] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
        [ 49.144673] usb 4-1: >new SuperSpeed USB device number 2 using xhci_hcd
        [ 49.163039] usb 4-1: >Parent hub missing LPM exit latency info. Power management will be impacted.
        [ 49.166789] usb 4-1: >New USB device found, idVendor=174c, idProduct=55aa
        [ 49.166793] usb 4-1: >New USB device strings: Mfr=2, Product=3, SerialNumber=1
        [ 49.166796] usb 4-1: >Product: AS2105
        [ 49.166799] usb 4-1: >Manufacturer: ASMedia
        [ 49.166801] usb 4-1: >SerialNumber: 0123456789ABCDEF
        [ 49.206372] usbcore: registered new interface driver uas
        [ 49.228891] Initializing USB Mass Storage driver...
        [ 49.229042] scsi6 : usb-storage 4-1:1.0
        [ 49.229115] usbcore: registered new interface driver usb-storage
        [ 49.229116] USB Mass Storage support registered.
        [ 64.045528] scsi 6:0:0:0: >Direct-Access WDC WD30 EZRX-00MMMB0 80.0 PQ: 0 ANSI: 0
        [ 64.046224] sd 6:0:0:0: >Attached scsi generic sg2 type 0
        [ 64.046881] sd 6:0:0:0: >[sdb] Very big device. Trying to use READ CAPACITY(16).
        [ 64.047610] sd 6:0:0:0: >[sdb] 5860533168 512-byte logical blocks: (3.00 TB/2.72 TiB)
        [ 64.048368] sd 6:0:0:0: >[sdb] Write Protect is off
        [ 64.048373] sd 6:0:0:0: >[sdb] Mode Sense: 23 00 00 00
        [ 64.048984] sd 6:0:0:0: >[sdb] No Caching mode page present
        [ 64.048987] sd 6:0:0:0: >[sdb] Assuming drive cache: write through
        [ 64.049297] sd 6:0:0:0: >[sdb] Very big device. Trying to use READ CAPACITY(16).
        [ 64.050942] sd 6:0:0:0: >[sdb] No Caching mode page present
        [ 64.050944] sd 6:0:0:0: >[sdb] Assuming drive cache: write through
        [ 94.245006] usb 4-1: >reset SuperSpeed USB device number 2 using xhci_hcd
        [ 94.262553] usb 4-1: >Parent hub missing LPM exit latency info. Power management will be impacted.
        [ 94.263805] xhci_hcd 0000:03:00.0: >xHCI xhci_drop_endpoint called with disabled ep ffff8800d37d1c00
        [ 94.263808] xhci_hcd 0000:03:00.0: >xHCI xhci_drop_endpoint called with disabled ep ffff8800d37d1c40
        [ 125.262722] usb 4-1: >reset SuperSpeed USB device number 2 using xhci_hcd
        [ 125.280304] usb 4-1: >Parent hub missing LPM exit latency info. Power management will be impacted.
        [ 125.281511] xhci_hcd 0000:03:00.0: >xHCI xhci_drop_endpoint called with disabled ep ffff8800d37d1c00
        [ 125.281516] xhci_hcd 0000:03:00.0: >xHCI xhci_drop_endpoint called with disabled ep ffff8800d37d1c40

    Read the article

  • How do I mount my External HDD with filesystem type errors?

    - by Snuggie
    I am a relatively new Ubuntu user and I am having some difficulty mounting my external 2TB HDD. When I first installed Linux my external HDD was working just fine, however, it has stopped working and I have a lot of important files on there that I need. Before my HDD would automatically mount and no worries. Now, however, it doesn't automatically mount and when I try to manually mount it I keep running into filesystem type errors that I can't seem to get past. Below are images that depict my step by step process of how I am trying to mount my HDD along with the errors I am receiving. If anybody has any idea what I am doing wrong or how to correct the issue I would greatly appreciate it. Step 1) Ensure the computer recognizes my external HDD. pj@PJ:~$ dmesg ... [ 5790.367910] scsi 7:0:0:0: Direct-Access WD My Passport 0748 1022 PQ: 0 ANSI: 6 [ 5790.368278] scsi 7:0:0:1: Enclosure WD SES Device 1022 PQ: 0 ANSI: 6 [ 5790.370122] sd 7:0:0:0: Attached scsi generic sg2 type 0 [ 5790.370310] ses 7:0:0:1: Attached Enclosure device [ 5790.370462] ses 7:0:0:1: Attached scsi generic sg3 type 13 [ 5792.971601] sd 7:0:0:0: [sdb] 3906963456 512-byte logical blocks: (2.00 TB/1.81 TiB) [ 5792.972148] sd 7:0:0:0: [sdb] Write Protect is off [ 5792.972162] sd 7:0:0:0: [sdb] Mode Sense: 47 00 10 08 [ 5792.972591] sd 7:0:0:0: [sdb] No Caching mode page found [ 5792.972605] sd 7:0:0:0: [sdb] Assuming drive cache: write through [ 5792.975235] sd 7:0:0:0: [sdb] No Caching mode page found [ 5792.975249] sd 7:0:0:0: [sdb] Assuming drive cache: write through [ 5792.987504] sdb: sdb1 [ 5792.988900] sd 7:0:0:0: [sdb] No Caching mode page found [ 5792.988911] sd 7:0:0:0: [sdb] Assuming drive cache: write through [ 5792.988920] sd 7:0:0:0: [sdb] Attached SCSI disk Step 2) Check if it mounted properly (it does not) pj@PJ:~$ df -ah Filesystem Size Used Avail Use% Mounted on /dev/sda1 682G 3.9G 644G 1% / proc 0 0 0 - /proc sysfs 0 0 0 - /sys none 0 0 0 - /sys/fs/fuse/connections none 0 0 0 - /sys/kernel/debug none 0 0 0 - /sys/kernel/security udev 2.9G 4.0K 2.9G 1% /dev devpts 0 0 0 - /dev/pts tmpfs 1.2G 928K 1.2G 1% /run none 5.0M 0 5.0M 0% /run/lock none 2.9G 156K 2.9G 1% /run/shm gvfs-fuse-daemon 0 0 0 - /home/pj/.gvfs Step 3) Try mounting manually using NTFS and VFAT (both as SDB and SDB1) pj@PJ:~$ sudo mount /dev/sdb /media/Passport/ NTFS signature is missing. Failed to mount '/dev/sdb': Invalid argument The device '/dev/sdb' doesn't seem to have a valid NTFS. Maybe the wrong device is used? Or the whole disk instead of a partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around? pj@PJ:~$ sudo mount /dev/sdb1 /media/Passport/ NTFS signature is missing. Failed to mount '/dev/sdb1': Invalid argument The device '/dev/sdb1' doesn't seem to have a valid NTFS. Maybe the wrong device is used? Or the whole disk instead of a partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around? pj@PJ:~$ sudo mount -t ntfs /dev/sdb /media/Passport/ NTFS signature is missing. Failed to mount '/dev/sdb': Invalid argument The device '/dev/sdb' doesn't seem to have a valid NTFS. Maybe the wrong device is used? Or the whole disk instead of a partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around? pj@PJ:~$ sudo mount -t vfat /dev/sdb /media/Passport/ mount: wrong fs type, bad option, bad superblock on /dev/sdb, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so pj@PJ:~$ sudo mount -t ntfs /dev/sdb1 /media/Passport/ NTFS signature is missing. 
Failed to mount '/dev/sdb1': Invalid argument The device '/dev/sdb1' doesn't seem to have a valid NTFS. Maybe the wrong device is used? Or the whole disk instead of a partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around? pj@PJ:~$ sudo mount -t vfat /dev/sdb1 /media/Passport/ mount: wrong fs type, bad option, bad superblock on /dev/sdb1, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so
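Before forcing a filesystem type with -t, it usually helps to check what signature (if any) is actually present on the partition. The commands below are generic diagnostics, not taken from the original post:

    sudo fdisk -l /dev/sdb                 # show the partition table the kernel sees
    sudo blkid /dev/sdb1                   # report the detected filesystem type and UUID, if any
    sudo mount /dev/sdb1 /media/Passport   # let mount auto-detect the type instead of guessing
    dmesg | tail                           # read the kernel's complaint after a failed mount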

    Read the article

  • Namespaces are obsolete

    - by Bertrand Le Roy
    To those of us who have been around for a while, namespaces have been part of the landscape. One could even say that they have been defining the large-scale features of the landscape in question. However, something happened fairly recently that I think makes this venerable structure obsolete. Before I explain this development and why it’s a superior concept to namespaces, let me recapitulate what namespaces are and why they’ve been so good to us over the years… Namespaces are used for a few different things: Scope: a namespace delimits the portion of code where a name (for a class, sub-namespace, etc.) has the specified meaning. Namespaces are usually the highest-level scoping structures in a software package. Collision prevention: name collisions are a universal problem. Some systems, such as jQuery, wave it away, but the problem remains. Namespaces provide a reasonable approach to global uniqueness (and in some implementations such as XML, enforce it). In .NET, there are ways to relocate a namespace to avoid those rare collision cases. Hierarchy: programmers like neat little boxes, and especially boxes within boxes within boxes. For some reason. Regular human beings, on the other hand, tend to think linearly, which is why the Windows explorer for example has tried in a few different ways to flatten the file system hierarchy for the user. 1 is clearly useful because we need to protect our code from bleeding effects from the rest of the application (and vice versa). A language with only global constructs may be what some of us started programming on, but it’s not desirable in any way today. 2 may not always be reasonably worth the trouble (jQuery is doing fine with its global plug-in namespace), but we still need it in many cases. One should note however that globally unique names are not the only possible implementation. In fact, they are a rather extreme solution. What we really care about is collision prevention within our application. What happens outside is irrelevant. 3 is, more than anything, an aesthetic choice. A common convention has been to encode the whole pedigree of the code into the namespace. Come to think about it, we never think we need to import “Microsoft.SqlServer.Management.Smo.Agent” and that would be very hard to remember. What we want to do is bring nHibernate into our app. And this is precisely what you’ll do with modern package managers and module loaders. I want to take the specific example of RequireJS, which is commonly used with Node. Here is how you import a module with RequireJS: var http = require("http"); This is of course importing an HTTP stack module into the code. There is no noise here. Let’s break this down. Scope (1) is provided by the one scoping mechanism in JavaScript: the closure surrounding the module’s code. Whatever scoping mechanism is provided by the language would be fine here. 
Collision prevention (2) is very elegantly handled. Whereas relocating is an afterthought, and an exceptional measure with namespaces, it is here on the frontline. You always relocate, using an extremely familiar pattern: variable assignment. We are very much used to managing our local variable names and any possible collision will get solved very easily by picking a different name. Wait a minute, I hear some of you say. This is only taking care of collisions on the client-side, on the left of that assignment. What if I have two libraries with the name “http”? Well, You can better qualify the path to the module, which is what the require parameter really is. As for hierarchical organization, you don’t really want that, do you? RequireJS’ module pattern does elegantly cover the bases that namespaces used to cover, but it also promotes additional good practices. First, it promotes usage of self-contained, single responsibility units of code through the closure-based, stricter scoping mechanism. Namespaces are somewhat more porous, as using/import statements can be used bi-directionally, which leads us to my second point… Sane dependency graphs are easier to achieve and sustain with such a structure. With namespaces, it is easy to construct dependency cycles (that’s bad, mmkay?). With this pattern, the equivalent would be to build mega-components, which are an easier problem to spot than a decay into inter-dependent namespaces, for which you need specialized tools. I really like this pattern very much, and I would like to see more environments implement it. One could argue that dependency injection has some commonalities with this for example. What do you think? This is the half-baked result of some morning shower reflections, and I’d love to read your thoughts about it. What am I missing?
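As a small illustration of the collision-prevention point above (the second module path is made up), two modules that share a name simply land in differently named local variables:

    // Illustrative only: "./lib/http" is a hypothetical local module that
    // happens to share its name with the platform's HTTP stack.
    var http = require("http");          // platform module
    var myHttp = require("./lib/http");  // local module, disambiguated by its path
    // No collision: each name lives only inside this module's closure.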

    Read the article

  • Access Control Service v2: Registering Web Identities in your Applications [concepts]

    - by Your DisplayName here!
    ACS v2 support two fundamental types of client identities– I like to call them “enterprise identities” (WS-*) and “web identities” (Google, LiveID, OpenId in general…). I also see two different “mind sets” when it comes to application design using the above identity types: Enterprise identities – often the fact that a client can present a token from a trusted identity provider means he is a legitimate user of the application. Trust relationships and authorization details have been negotiated out of band (often on paper). Web identities – the fact that a user can authenticate with Google et al does not necessarily mean he is a legitimate (or registered) user of an application. Typically additional steps are necessary (like filling out a form, email confirmation etc). Sometimes also a mixture of both approaches exist, for the sake of this post, I will focus on the web identity case. I got a number of questions how to implement the web identity scenario and after some conversations it turns out it is the old authentication vs. authorization problem that gets in the way. Many people use the IsAuthenticated property on IIdentity to make security decisions in their applications (or deny user=”?” in ASP.NET terms). That’s a very natural thing to do, because authentication was done inside the application and we knew exactly when the IsAuthenticated condition is true. Been there, done that. Guilty ;) The fundamental difference between these “old style” apps and federation is, that authentication is not done by the application anymore. It is done by a third party service, and in the case of web identity providers, in services that are not under our control (nor do we have a formal business relationship with these providers). Now the issue is, when you switch to ACS, and someone with a Google account authenticates, indeed IsAuthenticated is true – because that’s what he is! This does not mean, that he is also authorized to use the application. It just proves he was able to authenticate with Google. Now this obviously leads to confusion. How can we solve that? Easy answer: We have to deal with authentication and authorization separately. Job done ;) For many application types I see this general approach: Application uses ACS for authentication (maybe both enterprise and web identities, we focus on web identities but you could easily have a dual approach here) Application offers to authenticate (or sign in) via web identity accounts like LiveID, Google, Facebook etc. Application also maintains a database of its “own” users. Typically you want to store additional information about the user In such an application type it is important to have a unique identifier for your users (think the primary key of your user database). What would that be? Most web identity provider (and all the standard ACS v2 supported ones) emit a NameIdentifier claim. This is a stable ID for the client (scoped to the relying party – more on that later). Furthermore ACS emits a claims identifying the identity provider (like the original issuer concept in WIF). When you combine these two values together, you can be sure to have a unique identifier for the user, e.g.: Facebook-134952459903700\799880347 You can now check on incoming calls, if the user is already registered and if yes, swap the ACS claims with claims coming from your user database. One claims would maybe be a role like “Registered User” which can then be easily used to do authorization checks in the application. 
The WIF claims authentication manager is a perfect place to do the claims transformation. If the user is not registered, show a register form. Maybe you can use some claims from the identity provider to pre-fill form fields (see here where I show how to use the Facebook API to fetch additional user properties). After successful registration (which may include other mechanisms like a confirmation email), flip the bit in your database to make the web identity a registered user. This is all very theoretical. In the next post I will show some code and provide a download link for the complete sample. More on NameIdentifier Identity providers “guarantee” that the name identifier for a given user in your application will always be the same. But different applications (in the case of ACS – different ACS namespaces) will see different name identifiers. This is by design to protect the privacy of users because identical name identifiers could be used to create “profiles” of some sort for that user. In technical terms they create the name identifier approximately like this: name identifier = Hash((Provider Internal User ID) + (Relying Party Address)) Why is this important to know? Well – when you change the name of your ACS namespace, the name identifiers will change as well and you will lose your “connection” to your existing users. Oh and btw – never use any other claims (like email address or name) to form a unique ID – these can often be changed by users.

    Read the article

  • Configuring MySQL Cluster Data Nodes

    - by Mat Keep
    In my previous blog post, I discussed the enhanced performance and scalability delivered by extensions to the multi-threaded data nodes in MySQL Cluster 7.2. In this post, I’ll share best practices on the configuration of data nodes to achieve optimum performance on the latest generations of multi-core, multi-thread CPU designs. Configuring the Data Nodes The configuration of data node threads can be managed in two ways via the config.ini file: - Simply set MaxNoOfExecutionThreads to the appropriate number of threads to be run in the data node, based on the number of threads presented by the processors used in the host or VM. - Use the new ThreadConfig variable that enables users to configure both the number of each thread type to use and also which CPUs to bind them to. The flexible configuration afforded by the multi-threaded data node enhancements means that it is possible to optimise data nodes to use anything from a single CPU/thread up to a 48 CPU/thread server. Co-locating the MySQL Server with a single data node can fully utilize servers with 64 – 80 CPU/threads. It is also possible to co-locate multiple data nodes per server, but this is now only required for very large servers with 4+ CPU sockets and dense multi-core processors. 24 Threads and Beyond! An example of how to make best use of a 24 CPU/thread server box is to configure the following: - 8 ldm threads - 4 tc threads - 3 recv threads - 3 send threads - 1 rep thread for asynchronous replication. Each of those threads should be bound to a CPU. It is possible to bind the main thread (schema management domain) and the IO threads to the same CPU in most installations. In the configuration above, we have bound threads to 20 different CPUs. We should also protect these 20 CPUs from interrupts by using the IRQBALANCE_BANNED_CPUS configuration variable in /etc/sysconfig/irqbalance and setting it to 0x0FFFFF. The reason for doing this is that MySQL Cluster generates a lot of interrupt and OS kernel processing, and so it is recommended to separate activity across CPUs to ensure conflicts with the MySQL Cluster threads are eliminated. When booting a Linux kernel it is also possible to provide an option isolcpus=0-19 in grub.conf. The result is that the Linux scheduler won't use these CPUs for any task. Only by using CPU affinity syscalls can a process be made to run on those CPUs. By using this approach, together with binding MySQL Cluster threads to specific CPUs and banning IRQ processing on those CPUs, a very stable performance environment is created for a MySQL Cluster data node. On a 32 CPU/Thread server: - Increase the number of ldm threads to 12 - Increase tc threads to 6 - Provide 2 more CPUs for the OS and interrupts. - The number of send and receive threads should, in most cases, still be sufficient. On a 40 CPU/Thread server, increase ldm threads to 16, tc threads to 8 and increase send and receive threads to 4. 
On a 48 CPU/Thread server it is possible to optimize further by using: - 12 tc threads - 2 more CPUs for the OS and interrupts - Avoid using IO threads and main thread on the same CPU - Add 1 more receive thread. Summary As both this and the previous post seek to demonstrate, the multi-threaded data node extensions not only serve to increase performance of MySQL Cluster, they also enable users to achieve significantly improved levels of utilization from current and future generations of massively multi-core, multi-thread processor designs. A big thanks to Mikael Ronstrom, Senior MySQL Architect at Oracle, for his work in developing these enhancements and best practices. You can download MySQL Cluster 7.2 today and try out all of these enhancements. The Getting Started guides are an invaluable aid to quickly building a Proof of Concept. Don’t forget to check out the MySQL Cluster 7.2 New Features whitepaper to discover everything that is new in the latest GA release.
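As a rough sketch of what the 24 CPU/thread layout described above could look like in config.ini using the ThreadConfig parameter (the CPU numbers are illustrative and depend entirely on the host's topology):

    [ndbd default]
    # Illustrative only - adjust thread counts and CPU IDs to the actual host
    ThreadConfig=ldm={count=8,cpubind=1,2,3,4,5,6,7,8},tc={count=4,cpubind=9,10,11,12},recv={count=3,cpubind=13,14,15},send={count=3,cpubind=16,17,18},rep={count=1,cpubind=19},main={cpubind=0},io={cpubind=0}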

    Read the article

  • Are there Negative Impacts of Open Source on a Commercial Environment?

    - by Lostsoul
    I know this is not a good fit for Stack Overflow but wasn't sure if it was good for this site also so let me know if its not and I'll delete it. I love programming for fun but my role in my company is not technical. I have always loved the hacker culture and have been trying to drive that openness within my company from day one. My company has a very broad range of products and there are a few that are not strategic to us so I wanted to open source them (so we can focus on what makes us unique and open source the products that every firm has). Our industry does not open source(we would be the first firm to try this) and the feedback I'm getting from my management team is either 1) we'll destroy the industry or 2) all competitive commercial firms will unite against us and we'll be wiped out either way. I disagreed on both points because I think transparency will only grow our industry and our firm (think of McDonalds/KFC sharing their recipe openly, people may copy you, competitors may target you, but customers also may feel more comfortable buying your product. The value add, I believe, is in the delivery and experience not in hoarding the recipe). It's a big battle in my firm right now between the IT people who have seen the positive effects of sharing and the business people who think we'll be giving up everything (they prefer we sell parts we want to opensource, but in their defense this is standard when divesting something). Our industry is very secretive and I don't want to put anyone(even my competitors employees) out of a job yet I don't want to protect inefficient people by not being open with everyone. Yet I've seen so many amazing technologies created in interesting ways just by giving people freedom to take apart code and put it back together. I'm interested in hearing people's thoughts(doesn't have to be to my specific situation, I'm looking for the general lessons). Its a very stressful decision(but one I feel I must make) because if we go the open source route then there will be no going back. So what are your thoughts? Does open sourcing apply generally or is it only really applicable to software? Is it overall good for people in the industry and outside? I'm actually more interested in the negativeness effects(although positive are welcomed as well) Update: Long story short, although code is involved this is not so much about code as it is more about the idea of open sourcing. We are a mid sized quant hedge fund. We have some unique strategies but also have the standard long/short, arbitrage, global macro, etc.. funds. We are keeping the unique funds we have but the other stuff that everyone else has we are considering open sourcing (We have put in years of work & millions of dollars into. Our funds is pretty popular and our performance is either in first or second quartile so I suspect there will be interest but I don't know to what extent). The goal is not to get a community to work for us or anything, the goal is to let anyone who wants to tinker with it do so and create anything they want (it will not be part of our product line although I may unofficially allocate some our of staff's time to assist any community that grows). Although the code base is quite large, the value in this is the industry knowledge and approaches we have acquired (there are many books on artificial intelligence and quant trading but they are often years behind what's really going on as most firms forbid their staff from discussing what they are doing). 
We are also considering, after we move our clients out, to let the software still run and output the resulting portfolios for free as well so people can at least see the results (as long as we have available infrastructure). I think our main choices are: we can continue to fight for market share in products that are becoming commoditized, we can shut the funds/products down (and keep the code, but no one outside of our firm will ever learn from it), or we can open source it and let people do what they want. By open sourcing it, my idea is that the talent pool in the industry will grow, because right now most of our hires have the same background (CFA, MBA, similar school, same experience, etc., because we can't spend time training people, so the industry 'standardizes' most people and thus the firms themselves start to look/act similar), but this may allow us to identify talent that has never been in the industry before (if we put a GPL license on it, then as people learn from what we did, we can learn from what they do as well and maybe apply it to other areas of our firm). I see a lot of benefits but not many negatives, while my peers at the company see the opposite.

    Read the article

  • Simple BizTalk Orchestration & Port Tutorial

    - by bosuch
    (This is a reference for a lunch & learn I'm giving at my company) This demo will create a BizTalk process that monitors a directory for an XML file, loads it into an orchestration, and drops it into a different directory. There’s no real processing going on (other than moving the file from one location to another), but this will introduce you to Messages, Orchestrations and Ports. To begin, create a new BizTalk Project names OrchestrationPortDemo: When the solution has been created, right-click the OrchestrationPortDemo solution name and select Add -> New Item. Add a BizTalk Orchestration named DemoOrchestration: Click Add and the orchestration will be created and displayed in the BizTalk Orchestration Designer. The designer allows you to visually create your business processes: Next, you will add a message (the basic unit of communication) to the orchestration. In the Orchestration View, right-click Messages and select New Message. In the message properties window, enter DemoMessage as the Identifier (the name), and select .NET Classes -> System.Xml.XmlDocument for Message Type. This indicates that we’ll be passing a standard Xml document in and out of the orchestration. Next, you will add Send and Receive shapes to the orchestration. From the toolbox, drag a Receive shape onto the orchestration (where it says “Drop a shape from the toolbox here”). Next, drag a Send shape directly below the Receive shape. For the properties of both shapes, select DemoMessage for Message – this indicates we’ll be passing around the message we created earlier. The Operation box will have a red exclamation mark next to it because no port has been specified. We will do this in a minute. On the Receive shape properties, you must be sure to select True for Activate. This indicates that the orchestration will be started upon receipt of a message, rather than being called by another orchestration. If you leave it set to false, when you try to build the application you’ll receive the error “You must specify at least one already-initialized correlation set for a non-activation receive that is on a non self-correlating port.” Now you’ll add ports to the orchestration. Ports specify how your orchestration will send and receive messages. Drag a port from the toolbox to the left-hand Port Surface, and the Port Configuration Wizard launches. For the first port (the receive port), enter the following information: Name: ReceivePort Select the port type to be used for this port: Create a new Port Type Port Type Name: ReceivePortType Port direction of communication: I’ll always be receiving <…> Port binding: Specify later By choosing “Specify later” you are choosing to bind the port (choose where and how it will send or receive its messages) at deployment time via the BizTalk Server Administration console. This allows you to change locations later without building and re-deploying the application. Next, drag a port to the right-hand Port Surface; this will be your send port. Configure it as follows: Name: SendPort Select the port type to be used for this port: Create a new Port Type Port Type Name: SendPortType Port direction of communication: I’ll always be sending <…> Port binding: Specify later Finally, drag the green arrow on the ReceivePort to the Receive_1 shape, and the green arrow on the SendPort to the Send_1 shape. Your orchestration should look like this: Now you have a couple final steps before building and deploying the application. 
In the Solution Explorer, right-click on OrchestrationPortDemo and select Properties. On the Signing tab, click “Sign the assembly”, and choose <New…> from the drop-down. Enter DemoKey as the Key file name, and deselect “Protect my key file with a password”. This will create the file DemoKey.snk in your solution. Signing the assembly gives it a strong name so that it can be deployed into the global assembly cache (GAC). Next, click the Deployment tab, and enter OrchestrationPortDemo as the Application Name. Save your solution. Click “Build OrchestrationPortDemo”. Your solution should (hopefully!) build with no errors. Click “Deploy OrchestrationPortDemo”. (Note – If you’re running Server 2008, Vista or Win7, you may get an error message. If so, close Visual Studio and run it as an administrator) That’s it! Your application is ready to be configured and fired up in the BizTalk Server Administration console, so stay tuned!

    Read the article

  • Selenium Webdriver (FirefoxDriver)

    - by Samareri
    Hey all, I'm using Selenium Webdriver to do some robottesting. Since some functions appear to only work in Firefox, I'm obligated to use Firefoxdriver. Now and then, something weird happens. Starting up te driver driver = new FirefoxDriver(); driver.get(URL); gets firefox to startup but not to go to the specified url. The strange thing is that it works on another computer with the same preferences set in Firefox. I solved this problem once by changing to another version of firefox, but this time this doesn't do the trick for me, it did however worked for the other developers. Yes, the error started for all developers on the same time, same day... My first question is: is it a firefox problem or Webdriver problem. Second question: how is it possible that it works on other pc's? Any help would be very appreciated Thanks Error: org.openqa.selenium.WebDriverException: Could not parse "". System info: os.name: 'Windows XP', os.arch: 'x86', os.version: '5.1', java.version: '1.6.0_18' Driver info: driver.version: firefox at org.openqa.selenium.firefox.Response.<init>(Response.java:53) at org.openqa.selenium.firefox.internal.AbstractExtensionConnection.nextResponse(AbstractExtensionConnection.java:258) at org.openqa.selenium.firefox.internal.AbstractExtensionConnection.readLoop(AbstractExtensionConnection.java:220) at org.openqa.selenium.firefox.internal.AbstractExtensionConnection.waitForResponseFor(AbstractExtensionConnection.java:213) at org.openqa.selenium.firefox.internal.AbstractExtensionConnection.sendMessageAndWaitForResponse(AbstractExtensionConnection.java:162) at org.openqa.selenium.firefox.FirefoxDriver.executeCommand(FirefoxDriver.java:329) at org.openqa.selenium.firefox.FirefoxDriver.sendMessage(FirefoxDriver.java:312) at org.openqa.selenium.firefox.FirefoxDriver.sendMessage(FirefoxDriver.java:308) at org.openqa.selenium.firefox.FirefoxDriver.fixId(FirefoxDriver.java:350) at org.openqa.selenium.firefox.FirefoxDriver.<init>(FirefoxDriver.java:130) at org.openqa.selenium.firefox.FirefoxDriver.<init>(FirefoxDriver.java:109) at be.....MMCRobotTest.login(MMCRobotTest.java:98) at be.....MMCRobotTestAttribute.testNewAttribute(MMCRobotTestAttribute.java:12) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at junit.framework.TestCase.runTest(TestCase.java:164) at junit.framework.TestCase.runBare(TestCase.java:130) at junit.framework.TestResult$1.protect(TestResult.java:106) at junit.framework.TestResult.runProtected(TestResult.java:124) at junit.framework.TestResult.run(TestResult.java:109) at junit.framework.TestCase.run(TestCase.java:120) at junit.framework.TestSuite.runTest(TestSuite.java:230) at junit.framework.TestSuite.run(TestSuite.java:225) at org.eclipse.jdt.internal.junit.runner.junit3.JUnit3TestReference.run(JUnit3TestReference.java:130) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197) Caused by: org.json.JSONException: A JSONObject text must begin with '{' at character 0 at 
org.json.JSONTokener.syntaxError(JSONTokener.java:496) at org.json.JSONObject.<init>(JSONObject.java:180) at org.json.JSONObject.<init>(JSONObject.java:403) at org.openqa.selenium.firefox.Response.<init>(Response.java:41) ... 30 more
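This does not explain the parse error itself, but one way to rule out Firefox-version drift between machines (the stated suspicion above) is to pin the driver to a known Firefox binary and a clean profile. The sketch below is untested and uses the older FirefoxDriver Java API; the path is a placeholder:

    import java.io.File;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxBinary;
    import org.openqa.selenium.firefox.FirefoxDriver;
    import org.openqa.selenium.firefox.FirefoxProfile;

    public class PinnedFirefox {
        public static WebDriver create() {
            // Placeholder path - point it at the Firefox build known to work
            FirefoxBinary binary = new FirefoxBinary(new File("C:\\selenium\\firefox\\firefox.exe"));
            FirefoxProfile profile = new FirefoxProfile(); // fresh profile, no stale extensions
            return new FirefoxDriver(binary, profile);
        }
    }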

    Read the article

  • NoPrimaryKeyException from DBUnit when loading a dataset into a databse

    - by Omar Kooheji
    I'm getting NoPrimaryKeyException when I try to run one of my unit tests which uses DBUnit. The datatable is created using Hibernate and is a join table between two classes mapping a many to many relationship. The annotations that define the relationship are as follows: @Override @ManyToMany @JoinTable(name="offset_file_offset_entries", joinColumns={@JoinColumn(name="offset_entry_id")},inverseJoinColumns={@JoinColumn(name="file_description_id")}) public List<OffsetEntry> getOffsets() { The other entries in in the XML file I'm using to define the dataset seem to work fine but not the join table. I get the following exception: org.dbunit.dataset.NoPrimaryKeyException: offset_file_offset_entries at org.dbunit.operation.UpdateOperation.getOperationData(UpdateOperation.java:72) at org.dbunit.operation.RefreshOperation$UpdateRowOperation.<init>(RefreshOperation.java:266) at org.dbunit.operation.RefreshOperation.createUpdateOperation(RefreshOperation.java:142) at org.dbunit.operation.RefreshOperation.execute(RefreshOperation.java:100) at org.dbunit.ext.mssql.InsertIdentityOperation.execute(InsertIdentityOperation.java:217) at uk.co.sabio.obscheduler.application.dao.AbstractBaseDatabaseTest.setUp(AbstractBaseDatabaseTest.java:57) at org.springframework.test.context.junit38.AbstractJUnit38SpringContextTests.runManaged(AbstractJUnit38SpringContextTests.java:332) at org.springframework.test.context.junit38.AbstractJUnit38SpringContextTests.access$0(AbstractJUnit38SpringContextTests.java:326) at org.springframework.test.context.junit38.AbstractJUnit38SpringContextTests$1.run(AbstractJUnit38SpringContextTests.java:216) at org.springframework.test.context.junit38.AbstractJUnit38SpringContextTests.runTest(AbstractJUnit38SpringContextTests.java:296) at org.springframework.test.context.junit38.AbstractJUnit38SpringContextTests.runTestTimed(AbstractJUnit38SpringContextTests.java:253) at org.springframework.test.context.junit38.AbstractJUnit38SpringContextTests.runBare(AbstractJUnit38SpringContextTests.java:213) at junit.framework.TestResult$1.protect(TestResult.java:110) at junit.framework.TestResult.runProtected(TestResult.java:128) at junit.framework.TestResult.run(TestResult.java:113) at junit.framework.TestCase.run(TestCase.java:124) at junit.framework.TestSuite.runTest(TestSuite.java:232) at junit.framework.TestSuite.run(TestSuite.java:227) at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83) at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:46) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197) the Dataset entry in question looks like this: <offset_file_offset_entries offset_entry_id="1" file_description_id="1" /> And matches up with the database which has both fields as primary keys (The Databse is MS SQL Server if that helps) There are corresponding entries in the two tables being Joined defined in the following xml: <dataset> <file_description file_path="src/test/resources/" file_pattern=".txt" file_description_id="1"/> <offset_file_description file_description_id="1"/> <offset_entries offset_entry_id="1" field_name="Field1" 
field_length="10" start_index="0"/> <offset_file_offset_entries offset_entry_id="1" file_description_id="1" /> </dataset> Do I have to define the primary keys in the hibernate annotations? If so How do I do so? do I have to change the way I define my dataset to imply that the two columns are a joint primary key? I'm not very proficient with hibernate or DBUnit for that matter and I'm at my wits end so any assistance would be really appreciated.

    Read the article

  • Python Ctypes Read/WriteProcessMemory() - Error 5/998 Help!

    - by user299805
    Please don't get scared but the following code, if you are familiar with ctypes or C it should be easy to read. I have been trying to get my ReadProcessMemory() and WriteProcessMemory() functions to be working for so long and have tried almost every possibility but the right one. It launches the target program, returns its PID and handle just fine. But I always get a error code of 5 - ERROR_ACCESS_DENIED. When I run the read function(forget the write for now). I am launching this program as what I believe to be a CHILD process with PROCESS_ALL_ACCESS or CREATE_PRESERVE_CODE_AUTHZ_LEVEL. I have also tried PROCESS_ALL_ACCESS and PROCESS_VM_READ when I open the handle. I can also say that it is a valid memory location because I can find it on the running program with CheatEngine. As for VirtualQuery() I get an error code of 998 - ERROR_NOACCESS which further confirms my suspicion of it being some security/privilege problem. Any help or ideas would be very appreciated, again, it's my whole program so far, don't let it scare you =P. from ctypes import * from ctypes.wintypes import BOOL import binascii BYTE = c_ubyte WORD = c_ushort DWORD = c_ulong LPBYTE = POINTER(c_ubyte) LPTSTR = POINTER(c_char) HANDLE = c_void_p PVOID = c_void_p LPVOID = c_void_p UNIT_PTR = c_ulong SIZE_T = c_ulong class STARTUPINFO(Structure): _fields_ = [("cb", DWORD), ("lpReserved", LPTSTR), ("lpDesktop", LPTSTR), ("lpTitle", LPTSTR), ("dwX", DWORD), ("dwY", DWORD), ("dwXSize", DWORD), ("dwYSize", DWORD), ("dwXCountChars", DWORD), ("dwYCountChars", DWORD), ("dwFillAttribute",DWORD), ("dwFlags", DWORD), ("wShowWindow", WORD), ("cbReserved2", WORD), ("lpReserved2", LPBYTE), ("hStdInput", HANDLE), ("hStdOutput", HANDLE), ("hStdError", HANDLE),] class PROCESS_INFORMATION(Structure): _fields_ = [("hProcess", HANDLE), ("hThread", HANDLE), ("dwProcessId", DWORD), ("dwThreadId", DWORD),] class MEMORY_BASIC_INFORMATION(Structure): _fields_ = [("BaseAddress", PVOID), ("AllocationBase", PVOID), ("AllocationProtect", DWORD), ("RegionSize", SIZE_T), ("State", DWORD), ("Protect", DWORD), ("Type", DWORD),] class SECURITY_ATTRIBUTES(Structure): _fields_ = [("Length", DWORD), ("SecDescriptor", LPVOID), ("InheritHandle", BOOL)] class Main(): def __init__(self): self.h_process = None self.pid = None def launch(self, path_to_exe): CREATE_NEW_CONSOLE = 0x00000010 CREATE_PRESERVE_CODE_AUTHZ_LEVEL = 0x02000000 startupinfo = STARTUPINFO() process_information = PROCESS_INFORMATION() security_attributes = SECURITY_ATTRIBUTES() startupinfo.dwFlags = 0x1 startupinfo.wShowWindow = 0x0 startupinfo.cb = sizeof(startupinfo) security_attributes.Length = sizeof(security_attributes) security_attributes.SecDescriptior = None security_attributes.InheritHandle = True if windll.kernel32.CreateProcessA(path_to_exe, None, byref(security_attributes), byref(security_attributes), True, CREATE_PRESERVE_CODE_AUTHZ_LEVEL, None, None, byref(startupinfo), byref(process_information)): self.pid = process_information.dwProcessId print "Success: CreateProcess - ", path_to_exe else: print "Failed: Create Process - Error code: ", windll.kernel32.GetLastError() def get_handle(self, pid): PROCESS_ALL_ACCESS = 0x001F0FFF PROCESS_VM_READ = 0x0010 self.h_process = windll.kernel32.OpenProcess(PROCESS_VM_READ, False, pid) if self.h_process: print "Success: Got Handle - PID:", self.pid else: print "Failed: Get Handle - Error code: ", windll.kernel32.GetLastError() windll.kernel32.SetLastError(10000) def read_memory(self, address): buffer = c_char_p("The data goes here") 
bufferSize = len(buffer.value) bytesRead = c_ulong(0) if windll.kernel32.ReadProcessMemory(self.h_process, address, buffer, bufferSize, byref(bytesRead)): print "Success: Read Memory - ", buffer.value else: print "Failed: Read Memory - Error Code: ", windll.kernel32.GetLastError() windll.kernel32.CloseHandle(self.h_process) windll.kernel32.SetLastError(10000) def write_memory(self, address, data): count = c_ulong(0) length = len(data) c_data = c_char_p(data[count.value:]) null = c_int(0) if not windll.kernel32.WriteProcessMemory(self.h_process, address, c_data, length, byref(count)): print "Failed: Write Memory - Error Code: ", windll.kernel32.GetLastError() windll.kernel32.SetLastError(10000) else: return False def virtual_query(self, address): basic_memory_info = MEMORY_BASIC_INFORMATION() windll.kernel32.SetLastError(10000) result = windll.kernel32.VirtualQuery(address, byref(basic_memory_info), byref(basic_memory_info)) if result: return True else: print "Failed: Virtual Query - Error Code: ", windll.kernel32.GetLastError() main = Main() address = None main.launch("C:\Program Files\ProgramFolder\Program.exe") main.get_handle(main.pid) #main.write_memory(address, "\x61") while 1: print '1 to enter an address' print '2 to virtual query address' print '3 to read address' choice = raw_input('Choice: ') if choice == '1': address = raw_input('Enter and address: ') if choice == '2': main.virtual_query(address) if choice == '3': main.read_memory(address) Thanks!
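A few details commonly bite this exact pattern: the address typed at the prompt is a string and must be converted to an integer, the read buffer should be a writable create_string_buffer rather than a c_char_p literal, and the handle usually needs query rights alongside PROCESS_VM_READ. The following is an illustrative sketch, not a guaranteed fix for the access-denied error:

    from ctypes import windll, create_string_buffer, c_size_t, c_void_p, byref

    PROCESS_VM_READ = 0x0010
    PROCESS_QUERY_INFORMATION = 0x0400

    def read_memory(pid, address, size=32):
        # Open with read + query rights; 'address' must be an int, not a string
        handle = windll.kernel32.OpenProcess(
            PROCESS_VM_READ | PROCESS_QUERY_INFORMATION, False, pid)
        buf = create_string_buffer(size)      # writable buffer, not c_char_p
        bytes_read = c_size_t(0)
        ok = windll.kernel32.ReadProcessMemory(
            handle, c_void_p(address), buf, size, byref(bytes_read))
        windll.kernel32.CloseHandle(handle)
        if not ok:
            raise WindowsError(windll.kernel32.GetLastError())
        return buf.raw[:bytes_read.value]

    # usage: an address entered as hex text must be converted first
    # data = read_memory(main.pid, int("0x0040A000", 16))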

    Read the article

  • 401 error when consuming a Web service with HTTP Basic authentication using CXF

    - by seanhodges
    I'm trying to consume a remote Web service that uses HTTP basic authentication, using Apache CXF, within a JUnit test. The error I am getting is: javax.xml.ws.WebServiceException: Failed to access the WSDL at: http://localhost:8080/services/MyService?wsdl. It failed with: Server returned HTTP response code: 401 for URL: http://localhost:8080/services/MyService?wsdl. at com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.tryWithMex(RuntimeWSDLParser.java:151) at com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parse(RuntimeWSDLParser.java:133) at com.sun.xml.internal.ws.client.WSServiceDelegate.parseWSDL(WSServiceDelegate.java:254) at com.sun.xml.internal.ws.client.WSServiceDelegate.<init>(WSServiceDelegate.java:217) at com.sun.xml.internal.ws.client.WSServiceDelegate.<init>(WSServiceDelegate.java:165) at com.sun.xml.internal.ws.spi.ProviderImpl.createServiceDelegate(ProviderImpl.java:93) at javax.xml.ws.Service.<init>(Service.java:76) at com.wave2.marketplace.importer.impl.adportal.ws.MyServiceService.<init>(MyServiceService.java:37) at com.wave2.marketplace.importer.impl.adportal.MyWSTest.testConsumingTheWS(MyWSTest.java:22) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at junit.framework.TestCase.runTest(TestCase.java:168) at junit.framework.TestCase.runBare(TestCase.java:134) at junit.framework.TestResult$1.protect(TestResult.java:110) at junit.framework.TestResult.runProtected(TestResult.java:128) at junit.framework.TestResult.run(TestResult.java:113) at junit.framework.TestCase.run(TestCase.java:124) at junit.framework.TestSuite.runTest(TestSuite.java:232) at junit.framework.TestSuite.run(TestSuite.java:227) at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83) at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:46) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197) Caused by: java.io.IOException: Server returned HTTP response code: 401 for URL: http://localhost:8080/services/MyService?wsdl at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1269) at java.net.URL.openStream(URL.java:1029) at com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.createReader(RuntimeWSDLParser.java:793) at com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.resolveWSDL(RuntimeWSDLParser.java:251) at com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parse(RuntimeWSDLParser.java:118) ... 
26 more Having read this StackOverflow post, I have attempted to add the auth credentials to my request context, as follows: @Test public void testConsumingTheWS() throws Exception { URL wsdl = new URL("http://localhost:8080/services/MyService?wsdl"); MyServiceService provider = new MyServiceService(wsdl); // <-- Error occurs here MyService service = provider.getMyService(); BindingProvider binding = (BindingProvider)service; binding.getRequestContext().put(BindingProvider.USERNAME_PROPERTY, "username"); binding.getRequestContext().put(BindingProvider.PASSWORD_PROPERTY, "password"); Ping out = service.getPing(); assertNotNull(out); } However, as my in-line comment indicates, the error is occurring before the BindingProvider code is reached, so the error remains the same. I did have a read of this article and its follow-up, but so far I've had trouble determining how to go about adding the interceptor code without the use of Spring (this is for a JUnit test). How might I go about authenticating against this Web service?
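The 401 is raised while the JAX-WS runtime downloads the WSDL, which happens before the BindingProvider request context is ever consulted, so those properties cannot help at that stage. One workaround people use is to register a process-wide java.net.Authenticator before constructing the service; a minimal sketch with placeholder credentials:

    import java.net.Authenticator;
    import java.net.PasswordAuthentication;

    public class WsdlAuth {
        public static void install() {
            // Supplies credentials for the WSDL fetch itself (and any other
            // HTTP request made through the default URL handler).
            Authenticator.setDefault(new Authenticator() {
                @Override
                protected PasswordAuthentication getPasswordAuthentication() {
                    return new PasswordAuthentication("username", "password".toCharArray());
                }
            });
        }
    }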

    Read the article

  • Why aren't the :locals hash variables being passed in to a partial, when called from inside my rake

    - by marshally
    I need to render a bunch of painfully long running partials using a rake task. When I try to pull the partial from a rake task, I get the dreaded "Called id for nil, which would mistakenly be 4" error, which usually means that my locals hash has not been properly set into the partial. Here's the rake task (some variable names have been changed to protect the innocent): namespace :precache do desc "Precache stuff" task :precache => :environment do av = ActionView::Base.new(Rails::Configuration.new.view_path, {}) av.class_eval do include ApplicationHelper end @user = User.find(21) @rank = Rank.find(2) data = av.render(:partial => "reports/listing", :locals => {:user => @user, :rank => @rank}) end end And this is the error that I am getting: ** Execute precache:precache rake aborted! Called id for nil, which would mistakenly be 4 -- if you really wanted the id of nil, use object_id On line #1 of app/views/reports/listing.html.erb 1: <%- @rid = @rank.id %> 2: <%- @cid = @user.id %> 3: <%- cache(:action => 'reports', :key => [arg1, arg2, arg3] ) do %> 4: <%- app/views/reports/_downline_js.html.erb:1 lib/tasks/precache_fragments.rake:12 rake (0.8.7) lib/rake.rb:636:in `call' rake (0.8.7) lib/rake.rb:636:in `execute' rake (0.8.7) lib/rake.rb:631:in `each' rake (0.8.7) lib/rake.rb:631:in `execute' rake (0.8.7) lib/rake.rb:597:in `invoke_with_call_chain' /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/monitor.rb:242:in `synchronize' rake (0.8.7) lib/rake.rb:590:in `invoke_with_call_chain' rake (0.8.7) lib/rake.rb:583:in `invoke' rake (0.8.7) lib/rake.rb:2051:in `invoke_task' rake (0.8.7) lib/rake.rb:2029:in `top_level' rake (0.8.7) lib/rake.rb:2029:in `each' rake (0.8.7) lib/rake.rb:2029:in `top_level' rake (0.8.7) lib/rake.rb:2068:in `standard_exception_handling' rake (0.8.7) lib/rake.rb:2023:in `top_level' rake (0.8.7) lib/rake.rb:2001:in `run' rake (0.8.7) lib/rake.rb:2068:in `standard_exception_handling' rake (0.8.7) lib/rake.rb:1998:in `run' rake (0.8.7) bin/rake:31 /usr/bin/rake:19:in `load' /usr/bin/rake:19 details: I'm using Rails 2.3.5 and Ruby 1.8.7. Developing on Mac OSX. Eventually I will be deploying to Heroku.
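A hedged guess at the mismatch, based on the error pointing at line 1 of the partial: the template reads @rank and @user instance variables, while the rake task only supplies :rank and :user locals, and a bare ActionView::Base never sees the task's own instance variables. Copying them onto the view (or switching the partial to the local names) is one way to line the two up:

    av.instance_variable_set(:@user, @user)
    av.instance_variable_set(:@rank, @rank)
    data = av.render(:partial => "reports/listing",
                     :locals  => {:user => @user, :rank => @rank})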

    Read the article

  • .NET: understanding web.config in asp.net

    - by mark smith
    Hi there, does anyone know of a good link that explains how to use the web.config? For example, I am using forms authentication, and I notice there is a system.web section which is then closed with /system.web, and then below configuration there are additional location tags. Here is an example: if you notice, there is an authentication mode=forms with authorization, which I presume is the ROOT. It is also self-contained within a system.web section. Below this there are more location= tags with their own system.web tags. I have never really understood what I am actually doing. I have tried checking the MSDN documentation but I still don't fully understand it. Can anyone help? If you notice, in my example everything is stored in 1 web.config... I thought the standard was to create a standard web.config and then create another web.config in the directory I wish to protect? <configuration> <system.web> <compilation debug="true" strict="false" explicit="true" targetFramework="4.0" /> <authentication mode="Forms"> <forms loginUrl="Login.aspx" defaultUrl="Login.aspx" cookieless="UseCookies" timeout="60"/> </authentication> <authorization> <allow users="*"/> </authorization> </system.web> <location path="Forms"> <system.web> <authorization> <deny users="?"/> <allow users="*"/> </authorization> </system.web> </location> <location path="Forms/Seguridad"> <system.web> <authorization> <allow roles="Administrador"/> <deny users="?"/> </authorization> </system.web> </location>

    Read the article

  • How to use multiple flatpages models in a django app?

    - by the_drow
    I have multiple models that can be converted to flatpages but have to have some extra information (For example I have an about us page but I also have a blog). However I understand that there must be only one flatpages model since the middleware only returns the flatpages instance and does not resolve the child models. What do I have to do? EDIT: It seems I need to change the views. Here's the current code: from django.contrib.flatpages.models import FlatPage from django.template import loader, RequestContext from django.shortcuts import get_object_or_404 from django.http import HttpResponse, HttpResponseRedirect from django.conf import settings from django.core.xheaders import populate_xheaders from django.utils.safestring import mark_safe from django.views.decorators.csrf import csrf_protect DEFAULT_TEMPLATE = 'flatpages/default.html' # This view is called from FlatpageFallbackMiddleware.process_response # when a 404 is raised, which often means CsrfViewMiddleware.process_view # has not been called even if CsrfViewMiddleware is installed. So we need # to use @csrf_protect, in case the template needs {% csrf_token %}. # However, we can't just wrap this view; if no matching flatpage exists, # or a redirect is required for authentication, the 404 needs to be returned # without any CSRF checks. Therefore, we only # CSRF protect the internal implementation. def flatpage(request, url): """ Public interface to the flat page view. Models: `flatpages.flatpages` Templates: Uses the template defined by the ``template_name`` field, or `flatpages/default.html` if template_name is not defined. Context: flatpage `flatpages.flatpages` object """ if not url.endswith('/') and settings.APPEND_SLASH: return HttpResponseRedirect("%s/" % request.path) if not url.startswith('/'): url = "/" + url # Here instead of getting the flat page it needs to find if it has a page with a child model. f = get_object_or_404(FlatPage, url__exact=url, sites__id__exact=settings.SITE_ID) return render_flatpage(request, f) @csrf_protect def render_flatpage(request, f): """ Internal interface to the flat page view. """ # If registration is required for accessing this page, and the user isn't # logged in, redirect to the login page. if f.registration_required and not request.user.is_authenticated(): from django.contrib.auth.views import redirect_to_login return redirect_to_login(request.path) if f.template_name: t = loader.select_template((f.template_name, DEFAULT_TEMPLATE)) else: t = loader.get_template(DEFAULT_TEMPLATE) # To avoid having to always use the "|safe" filter in flatpage templates, # mark the title and content as already safe (since they are raw HTML # content in the first place). f.title = mark_safe(f.title) f.content = mark_safe(f.content) # Here I need to be able to configure what I am passing in the context c = RequestContext(request, { 'flatpage': f, }) response = HttpResponse(t.render(c)) populate_xheaders(request, response, FlatPage, f.id) return response
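One possible shape for the lookup marked "Here instead of getting the flat page..." in the view above, assuming the extra page types subclass FlatPage via multi-table inheritance; the child accessor names ("blogpage", "aboutpage") are hypothetical placeholders:

    from django.core.exceptions import ObjectDoesNotExist

    CHILD_ACCESSORS = ('blogpage', 'aboutpage')  # lowercased child class names

    def resolve_child_page(flatpage):
        """Return the most specific child instance for a FlatPage, if any."""
        for accessor in CHILD_ACCESSORS:
            try:
                return getattr(flatpage, accessor)
            except (ObjectDoesNotExist, AttributeError):
                continue
        return flatpage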

    Read the article

  • What is "Either the application has not called WSAStartup, or WSAStartup failed"

    - by Am1rr3zA
    I develop a program which connect to a web-server through network every thing works fine until I try to use some dongle to protect my software. the dongle has network feature too and it's API work under network infrastructure. when I added the Dongle checking code to my program I got this error: "Either the application has not called WSAStartup, or WSAStartup failed" I don't have any idea what is going on? I put the block of code which encounter exception here. the scenario I got the exception is I log in to the program (everything works fine) the plug out the dongle then the program stop and ask for dongle and I plug in the dongle again and try to log in bu I got exception on line response = (HttpWebResponse)request.GetResponse(); DongleService unikey = new DongleService(); checkDongle = unikey.isConnectedNow(); if (checkDongle) { isPass = true; this.username = txtbxUser.Text; this.pass = txtbxPass.Text; this.IP = combobxServer.Text; string uri = @"https://" + combobxServer.Text + ":5002num_events=1"; request = (HttpWebRequest)WebRequest.Create(uri); request.Proxy = null; request.Credentials = new NetworkCredential(this.username, this.pass); ServicePointManager.ServerCertificateValidationCallback = ((sender, certificate, chain, sslPolicyErrors) => true); response = (HttpWebResponse)request.GetResponse(); Properties.Settings.Default.User = txtbxUser.Text; int index = _servers.FindIndex(p => p == combobxServer.Text); if (index == -1) { _servers.Add(combobxServer.Text); Config_Save.SaveServers(_servers); _servers = Config_Save.LoadServers(); } Properties.Settings.Default.Server = combobxServer.Text; // also save the password if (checkBox1.CheckState.ToString() == "Checked") Properties.Settings.Default.Pass = txtbxPass.Text; Properties.Settings.Default.settingLoginUsername = this.username; Properties.Settings.Default.settingLoginPassword = this.pass; Properties.Settings.Default.settingLoginPort = "5002"; Properties.Settings.Default.settingLoginIP = this.IP; Properties.Settings.Default.isLogin = "guest"; Properties.Settings.Default.Save(); response.Close(); request.Abort(); this.isPass = true; this.Close(); } else { MessageBox.Show("Please Insert Correct Dongle!", "Dongle Error", MessageBoxButtons.OK, MessageBoxIcon.Stop); }

    Read the article

  • A mysterious compilation error: cannot convert from 'const boost::shared_ptr<T>' to 'const boost::shared_ptr<T>'

    - by Stephane Rolland
    I wanted to protect the access to a log file that I use for multithreaded logging with boostlog library. I tried this stream class class ThreadSafeStream { public: template <typename TInput> const ThreadSafeStream& operator<< (const TInput &tInput) const { // some thread safe file access return *this; } }; using it this way (text_sink is a boostlog object): //... m_spSink.reset(new text_sink); text_sink::locked_backend_ptr pBackend = m_spSink->locked_backend(); const boost::shared_ptr< ThreadSafeStream >& spFileStream = boost::make_shared<ThreadSafeStream>(); pBackend->add_stream(spFileStream); // this causes the compilation error and I get this mysterious error: cannot convert from 'const boost::shared_ptr<T>' to 'const boost::shared_ptr<T>' the whole compile error: Log.cpp(79): error C2664: 'boost::log2_mt_nt5::sinks::basic_text_ostream_backend<CharT>::add_stream' : cannot convert parameter 1 from 'const boost::shared_ptr<T>' to 'const boost::shared_ptr<T> &' 1> with 1> [ 1> CharT=char 1> ] 1> and 1> [ 1> T=ThreadSafeStream 1> ] 1> and 1> [ 1> T=std::basic_ostream<char,std::char_traits<char>> 1> ] 1> Reason: cannot convert from 'const boost::shared_ptr<T>' to 'const boost::shared_ptr<T>' 1> with 1> [ 1> T=ThreadSafeStream 1> ] 1> and 1> [ 1> T=std::basic_ostream<char,std::char_traits<char>> 1> ] 1> No user-defined-conversion operator available that can perform this conversion, or the operator cannot be called I suspect that I am not well defining the operator<<()... but I don't find what is wrong.
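Reading the error text, add_stream appears to expect a boost::shared_ptr<std::basic_ostream<char>>, and shared_ptr only converts implicitly between related types, so a class that merely defines its own operator<< will not do. A minimal sketch of one way to satisfy the call, assuming the locking is pushed down into a custom streambuf (omitted here), is:

    #include <ostream>
    #include <boost/shared_ptr.hpp>
    #include <boost/make_shared.hpp>

    // Deriving from std::ostream makes shared_ptr<ThreadSafeStream> convertible
    // to shared_ptr<std::ostream>, which is what the backend's add_stream wants.
    class ThreadSafeStream : public std::ostream {
    public:
        explicit ThreadSafeStream(std::streambuf* buf) : std::ostream(buf) {}
        // a mutex-guarded streambuf would carry the thread safety; omitted for brevity
    };

    // Now the implicit upcast exists:
    // boost::shared_ptr<std::ostream> spStream = boost::make_shared<ThreadSafeStream>(someBuf);
    // pBackend->add_stream(spStream);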

    Read the article

< Previous Page | 46 47 48 49 50 51 52 53 54 55 56 57  | Next Page >