Search Results

Search found 1725 results on 69 pages for 'greg the ant'.


  • Talking JavaOne with Rock Star Kirk Pepperdine

    - by Janice J. Heiss
    Kirk Pepperdine is not only a JavaOne Rock Star but a Java Champion and a highly regarded expert in Java performance tuning who works as a consultant, educator, and author. He is the principal consultant at Kodewerk Ltd. He speaks frequently at conferences and co-authored the Ant Developer's Handbook. In the rapidly shifting world of information technology, Pepperdine, as much as anyone, keeps up with what's happening with Java performance tuning. Pepperdine will participate in the following sessions: CON5405 - Are Your Garbage Collection Logs Speaking to You? BOF6540 - Java Champions and JUG Leaders Meet Oracle Executives (with Jeff Genender, Mattias Karlsson, Henrik Stahl, Georges Saab) HOL6500 - Finding and Solving Java Deadlocks (with Heinz Kabutz, Ellen Kraffmiller Martijn Verburg, Jeff Genender, and Henri Tremblay) I asked him what technological changes need to be taken into account in performance tuning. “The volume of data we're dealing with just seems to be getting bigger and bigger all the time,” observed Pepperdine. “A couple of years ago you'd never think of needing a heap that was 64g, but today there are deployments where the heap has grown to 256g and tomorrow there are plans for heaps that are even larger. Dealing with all that data simply requires more horse power and some very specialized techniques. In some cases, teams are trying to push hardware to the breaking point. Under those conditions, you need to be very clever just to get things to work -- let alone to get them to be fast. We are very quickly moving from a world where everything happens in a transaction to one where if you were to even consider using a transaction, you've lost." When asked about the greatest misconceptions about performance tuning that he currently encounters, he said, “If you have a performance problem, you should start looking at code at the very least and for that extra step, whip out an execution profiler. I'm not going to say that I never use execution profilers or look at code. What I will say is that execution profilers are effective for a small subset of performance problems and code is literally the last thing you should look at.And what is the most exciting thing happening in the world of Java today? “Interesting question because so many people would say that nothing exciting is happening in Java. Some might be disappointed that a few features have slipped in terms of scheduling. But I'd disagree with the first group and I'm not so concerned about the slippage because I still see a lot of exciting things happening. First, lambda will finally be with us and with lambda will come better ways.” For JavaOne, he is proctoring for Heinz Kabutz's lab. “I'm actually looking forward to that more than I am to my own talk,” he remarked. “Heinz will be the third non-Sun/Oracle employee to present a lab and the first since Oracle began hosting JavaOne. He's got a great message. He's spent a ton of time making sure things are going to work, and we've got a great team of proctors to help out. After that, getting my talk done, the Java Champion's panel session and then kicking back and just meeting up and talking to some Java heads."Finally, what should Java developers know that they currently do not know? “’Write Once, Run Everywhere’ is a great slogan and Java has come closer to that dream than any other technology stack that I've used. That said, different hardware bits work differently and as hard as we try, the JVM can't hide all the differences. 
Plus, if we are to get good performance we need to work with our hardware and not against it. All this implies that Java developers need to know more about the hardware they are deploying to.”

    Read the article

  • How to automate a monitoring system for ETL runs

    - by Jeffrey McDaniel
    Upon completion of the Primavera ETL process there are a few ways to determine if the process finished successfully. First, in the <installation directory>\log folder, there are staretlprocess.log and staretl.html files. These files give the output results of the ETL run. The staretl.html file gives a detailed summary of each step of the process, its run time, and its status. The .log file, based on the logging level set in the Configuration tool, can give extensive information about the ETL process. The log file can be used as a validation for process completion. To automate the monitoring of these log files, perform the following steps:
    1. Write a custom application to parse through the log file and search for [ERROR]. In most cases, a major [ERROR] could cause the ETL process to fail. Searching the log and finding this value is worthy of an alert (a minimal sketch of such a parser appears after the query below).
    2. Determine the total number of steps in the ETL process, and validate that the log file recorded an entry for the final step. For example, validate that your log file contains an entry for Step 39/39 (this could be different based on the version you are running). If there is no Step 39/39, then either the process is taking longer than expected or it didn't make it to the end. Either way, this would be good cause for an alert.
    3. Check the last line in the log file. The last line of the log file should contain an indication that the ETL run completed successfully. For example, the last line of a log file will say (results could be different based on Reporting Database versions): [INFO] (Message) Finished Writing Report
    4. You could write an Ant script to execute the ETL process with failonerror="true" set, and from there send the results to an external monitoring tool, to email, or to a database.
    With each ETL run, the log file appends to the existing log file by default. Because of this behavior, I would recommend renaming the existing log files before running a new ETL process. By doing this, only log entries for the currently running ETL process are recorded in the new log files. Based on these log entries, alerts can be set up to notify the administrator or DBA. Another way to determine if the ETL process has completed successfully is to monitor the etl_processmaster table. Depending on the Reporting Database version this could be in the Stage or Star database; as of Reporting Database 2.2 and higher it is in the Star database. The etl_processmaster table records entries for the ETL run along with a Start and Finish time. If the ETL process has failed, the Finish date will be null. This table can be queried at a time when the ETL process is expected to have finished, and an alert sent if the Finish date is still null. These are just some options; there are additional ways this can be accomplished based around these two areas - log files or the database.
    Here is an additional query to gather more information about your ETL run (connect as Staruser):

        SELECT SYSDATE,
               test_script,
               decode(loc, 0, PROCESSNAME, trim(SUBSTR(PROCESSNAME, loc + 1))) PROCESSNAME,
               duration duration
          FROM (SELECT (e.endtime - b.starttime) * 1440 duration,
                       to_char(b.starttime, 'hh24:mi:ss') starttime,
                       to_char(e.endtime, 'hh24:mi:ss') endtime,
                       b.PROCESSNAME,
                       instr(b.PROCESSNAME, ']') loc,
                       b.infotype test_script
                  FROM (SELECT processid, infodate starttime, PROCESSNAME, INFOMSG, INFOTYPE
                          FROM etl_processinfo
                         WHERE processid = (SELECT max(PROCESSID) FROM etl_processinfo)
                           AND infotype = 'BEGIN') b
                 INNER JOIN
                       (SELECT processid, infodate endtime, PROCESSNAME, INFOMSG, INFOTYPE
                          FROM etl_processinfo
                         WHERE processid = (SELECT max(PROCESSID) FROM etl_processinfo)
                           AND infotype = 'END') e
                    ON b.processid = e.processid
                   AND b.PROCESSNAME = e.PROCESSNAME
                 ORDER BY b.starttime)
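    For steps 1 through 3 above, a minimal sketch of such a log parser might look like the following (Java here as a plain example). The log file name and location, and the "Step 39/39" marker, are assumptions taken from the description above and should be adjusted to your own installation and ETL version.

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;

        // Minimal sketch of an ETL log monitor covering steps 1-3 above.
        // The default log name and the "Step 39/39" marker are assumptions.
        public class StarEtlLogMonitor {

            public static void main(String[] args) throws IOException {
                String logPath = args.length > 0 ? args[0] : "staretlprocess.log";
                boolean errorFound = false;     // step 1: any [ERROR] is worth an alert
                boolean finalStepFound = false; // step 2: did the run reach the final step?
                String lastLine = "";           // step 3: remember the last non-empty line

                BufferedReader reader = new BufferedReader(new FileReader(logPath));
                try {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        if (line.contains("[ERROR]")) {
                            errorFound = true;
                        }
                        if (line.contains("Step 39/39")) {
                            finalStepFound = true;
                        }
                        if (line.trim().length() > 0) {
                            lastLine = line;
                        }
                    }
                } finally {
                    reader.close();
                }

                boolean finishedOk = lastLine.contains("Finished Writing Report");
                if (errorFound || !finalStepFound || !finishedOk) {
                    // Hook your alert in here: email, monitoring tool, or a database insert.
                    System.out.println("ALERT: ETL run looks incomplete or failed (" + logPath + ")");
                } else {
                    System.out.println("ETL run completed successfully.");
                }
            }
        }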

    Read the article

  • Reading the tea leaves from Windows Azure support

    - by jamiet
    A few idle thoughts… Three months ago I had an issue regarding Windows Azure where I was unable to login to the management portal. At the time I contacted Azure support, the issue was soon resolved and I thought no more about it. Until today that is when I received an email from Azure support providing a detailed analysis of the root cause, the fix and moreover precise details about when and where things occurred. The email itself is interesting and I have included the entirety of it below. A few things were interesting to me: The level of detail and the diligence in investigating and reporting the issue I found really rather impressive. They even outline the number of users that were affected (127 in case you can’t be bothered reading). Compare this to the quite pathetic support that another division within Microsoft, Skype, provided to Greg Low recently: Skype support and dead parrot sketches   This line: “Windows Azure performed a planned change from using the Microsoft account service (formerly Windows Live ID) to the Azure Active Directory (AAD) as its primary authentication mechanism on August 24th. This change was made to enable future innovation in the area of authentication – particularly for organizationally owned identities, identity federation, stronger authentication methods and compliance certification. ” I also found to be particularly interesting. I have long thought that one of the reasons Microsoft has proved to be such a money-making machine in the enterprise is because they provide the infrastructure and then upsell on top of that – and nothing is more infrastructural than Active Directory. It has struck me of late that they are trying to make the same play of late in the cloud by tying all their services into Azure Active Directory and here we see a clear indication of that by making AAD the authentication mechanism for anyone using Windows Azure. I get the feeling that we’re going to hear much much more about AAD in the future; isn’t it about time we could log on to SQL Azure Windows Azure SQL Database without resorting to SQL authentication, for example? And why do Microsoft have two identity providers – Microsoft Account (aka Windows Live ID) and AAD – isn’t it about time those things were combined? As I said, just some idle thoughts. Below is the transcript of the email if you are interested. @Jamiet  This is regarding the support request <redacted> where in you were not able to login into the windows azure management portal with live id. We are providing you with the summary, root cause analysis and information about permanent fix: Incident Title: You were unable to access Windows Azure Portal after Microsoft Account to Azure Active Directory account Migration. Service Impacted: Management Portal Incident Start Date and Time: 8/24/2012 4:30:00 PM Date and Time Service was Restored: 10/17/2012 12:00:00 AM Summary: Windows Azure performed a planned change from using the Microsoft account service (formerly Windows Live ID) to the Azure Active Directory (AAD) as its primary authentication mechanism on August 24th.   This change was made to enable future innovation in the area of authentication – particularly for organizationally owned identities, identity federation, stronger authentication methods and compliance certification.   While this migration was largely transparent to Windows Azure users, a small number of users whose sign-in names were part of a Windows Live Custom Domain were unable to login.   
This incompatibility was not discovered during the Quality Assurance testing phase prior to the migration. Customer Impact: Customers whose sign-in names were part of a Windows Live Custom Domain were unable to sign-in the Management Portal after ~4:00 p.m. PST on August 24th, 2012.   We determined that the issue did impact at least 127 users in 98 of these Windows Live Custom Domains and had a maximum potential impact of 1,110 users in total. Root Cause: The root cause of the issue was an incompatibility in the AAD authentication service to handle logins from Microsoft accounts whose sign-in names were part of a Windows Live Custom Domains.  This issue was not discovered during the Quality Assurance testing phase prior to the migration from Microsoft Account (MSA) to AAD. Mitigations: The issue was mitigated for the majority of affected users by 8:20 a.m. PST on August 25th, 2012 by running some internal scripts to correct many known Windows Live Custom Domains.   The remaining affected domains fell into two categories: Windows Live Custom Domains that were not corrected by 8/25/2012. An additional 48 Windows Live Custom Domains were fixed in the weeks following the incident within 2 business days after the AAD team received an escalation from product support regarding those accounts. Windows Live Custom domains that were also provisioned in Office365. Some of the affected Windows Live Custom Domains had already been provisioned in AAD because their owners signed up for Office365 which is a service that also uses AAD.   In these cases the Azure customers had to work around the issue by renaming their Microsoft Account or using a different Microsoft Account to administer their Azure subscription. Permanent Fix: The Azure Active Directory team permanently fixed the issue for all customers on 10/17/2012 in an upgraded release of the AAD service.

    Read the article

  • How to install GIT on an offline RHEL?

    - by Stijn Vanpoucke
    I'm using the following commands from the manual to install GIT:

        $ tar -zxf git-1.7.2.2.tar.gz
        $ cd git-1.7.2.2
        $ make prefix=/usr/local all
        $ sudo make prefix=/usr/local install

    but I'm receiving the following exceptions:

        cache.h: At top level:
        cache.h:746: error: expected declaration specifiers or '...' before 'time_t'
        cache.h:889: warning: 'struct timeval' declared inside parameter list
        cache.h:895: warning: 'struct timeval' declared inside parameter list
        cache.h:970: error: expected specifier-qualifier-list before 'off_t'
        cache.h:979: error: expected specifier-qualifier-list before 'off_t'
        cache.h:997: error: expected specifier-qualifier-list before 'off_t'
        cache.h:1057: error: expected declaration specifiers or '...' before 'off_t'
        cache.h:1063: error: expected declaration specifiers or '...' before 'uint32_t'
        cache.h:1064: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'nth_packed_object_offset'
        cache.h:1065: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'find_pack_entry_one'
        cache.h:1067: error: expected declaration specifiers or '...' before 'off_t'
        cache.h:1069: error: expected declaration specifiers or '...' before 'off_t'
        cache.h:1070: error: expected declaration specifiers or '...' before 'off_t'
        cache.h:1094: error: expected specifier-qualifier-list before 'off_t'
        cache.h:1168: error: expected ')' before '*' token
        cache.h:1177: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'read_in_full'
        cache.h:1178: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'write_in_full'
        cache.h:1179: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'write_str_in_full'
        cache.h:1252: error: expected declaration specifiers or '...' before 'FILE'
        In file included from credential-store.c:2:
        credential.h:28: error: expected declaration specifiers or '...' before 'FILE'
        credential.h:29: error: expected declaration specifiers or '...' before 'FILE'
        In file included from credential-store.c:4:
        parse-options.h:115: error: expected specifier-qualifier-list before 'intptr_t'
        credential-store.c: In function 'parse_credential_file':
        credential-store.c:13: error: 'FILE' undeclared (first use in this function)
        credential-store.c:13: error: 'fh' undeclared (first use in this function)
        credential-store.c:17: warning: implicit declaration of function 'fopen'
        credential-store.c:19: error: 'errno' undeclared (first use in this function)
        credential-store.c:19: error: 'ENOENT' undeclared (first use in this function)
        credential-store.c:24: error: too many arguments to function 'strbuf_getline'
        credential-store.c:24: error: 'EOF' undeclared (first use in this function)
        credential-store.c:39: warning: implicit declaration of function 'fclose'
        credential-store.c: In function 'print_entry':
        credential-store.c:44: warning: implicit declaration of function 'printf'
        credential-store.c:44: warning: incompatible implicit declaration of built-in function 'printf'
        credential-store.c: In function 'main':
        credential-store.c:132: warning: implicit declaration of function 'umask'
        credential-store.c:144: error: 'stdin' undeclared (first use in this function)
        credential-store.c:144: error: too many arguments to function 'credential_read'
        credential-store.c:147: warning: implicit declaration of function 'strcmp'

    Is this because I didn't install the dependencies?

        apt-get install libcurl4-gnutls-dev libexpat1-dev gettext libz-dev libssl-dev

    How do I install them offline?

    Read the article

  • Automating silent software deployments on Solaris 10

    - by datSilencer
    Hello everyone. Essentially, the question I'd like to ask is related to the automation of software package deployments on Solaris 10. Specifically, I have a set of software components in tar files that run as daemon processes after being extracted and configured in the host environment. Pretty much like any server side software package out there, I need to ensure that a list of prerequisites are met before extracting and running the software. For example: Checking that certain users exists, and they are associated with one or many user groups. If not, then create them and their group associations. Checking that target application folders exist and if not, then create them with preconfigured path values defined when the package was assembled. Checking that such folders have the appropriate access control level and ownership for a certain user. If not, then set them. Checking that a set of environment variables are defined in /etc/profile, pointed to predefined path locations, added to the general $PATH environment variable, and finally exported into the user's environment. Other files include /etc/services and /etc/system. Obviously, doing this for many boxes (the goal in question) by hand can be slow and error prone. I believe a better alternative is to somehow automate this process. So far I have thought about the following options, and discarded them for one reason or another. 1) Traditional shell scripts. I've only troubleshooted these before, and I don't really have much experience with them. These would be my last resort. 2) Python scripts using the pexpect library for analyzing system command output. This was my initial choice since the target Solaris environments have it installed. However, I want to make sure that I'm not reinveting the wheel again :P. 3) Ant or Gradle scripts. They may be an option since the boxes also have java 1.5 enabled, and the fileset abstractions can be very useful. However, they may fall short when dealing with user and folder permissions checking/setting. It seems obvious to me that I'm not the first person in this situation, but I don't seem to find a utility framework geared towards this purpose. Please let me know if there's a better way to accomplish this. I thank you for your time and help.
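    For illustration only (this is my sketch, not something from the post): since the target boxes already have Java 1.5 available, the kind of prerequisite checks described above could also be driven from a small Java program that shells out to the native tools. All user, group and path names below are made-up examples.

        import java.io.File;
        import java.io.IOException;

        // Rough sketch of option-2/3 style prerequisite checks driven from Java.
        // The commands used here (id, useradd, chown, chmod) produce little output,
        // so their streams are not drained; a production version should do so.
        public class PrereqCheck {

            // A user exists if `id <name>` exits with status 0.
            static boolean userExists(String user) throws IOException, InterruptedException {
                Process p = new ProcessBuilder("id", user).start();
                return p.waitFor() == 0;
            }

            // Create a target folder (and its parents) if it is missing.
            static void ensureDir(String path) {
                File dir = new File(path);
                if (!dir.exists() && !dir.mkdirs()) {
                    throw new IllegalStateException("Could not create " + path);
                }
            }

            // Delegate ownership/permission changes to the native tools and fail loudly.
            static void run(String... cmd) throws IOException, InterruptedException {
                Process p = new ProcessBuilder(cmd).start();
                if (p.waitFor() != 0) {
                    throw new IllegalStateException("Command failed: " + cmd[0]);
                }
            }

            public static void main(String[] args) throws Exception {
                if (!userExists("appuser")) {
                    run("useradd", "-g", "appgroup", "appuser");      // made-up names
                }
                ensureDir("/opt/myapp/logs");                          // made-up path
                run("chown", "-R", "appuser:appgroup", "/opt/myapp");
                run("chmod", "-R", "750", "/opt/myapp");
            }
        }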

    Read the article

  • Why won't my Windows 8 Command line update its path

    - by mawcsco
    I needed to add a new entry to my PATH variable. This is a common activity for me in my job, but I've recently started using Windows 8. I assumed the process would be similar to Windows 7, Vista, XP... Here's my sequence of events: Open System properties (Start - [type "Control Panel"] - Control Panel\System and Security\System - Advanced system settings - Environment Variables). Add the new path to the beginning of my USER PATH variable (C:\dev\Java\apache-ant-1.8.4\bin;). Open a command prompt (Start - [type "command prompt", enter] - [type "path", enter]). My new path entry is not available (see attached image and video). I duplicated the exact same process on a Windows 7 machine and it worked. EDIT Windows 8 Environment Variables and Command Prompt video EDIT This is definitely not the behavior of Windows 7. Watch this video to see the behavior I expect, working in Windows 7: http://youtu.be/95JXY5X0fII EDIT 5/31/2013 So, after much frustration, I wrote a small C# app to test the WM_SETTINGCHANGE event. This code receives the event in both Windows 7 and Windows 8. However, in Windows 8 on my system, I do not get the correct path; but I do in Windows 7. This could not be reproduced on other Windows 8 systems. Here is the C# code:

        using System;
        using Microsoft.Win32;

        public sealed class App
        {
            static void Main()
            {
                SystemEvents.UserPreferenceChanging += new UserPreferenceChangingEventHandler(OnUserPreferenceChanging);
                Console.WriteLine("Waiting for system events.");
                Console.WriteLine("Press <Enter> to exit.");
                Console.ReadLine();
            }

            static void OnUserPreferenceChanging(object sender, UserPreferenceChangingEventArgs e)
            {
                Console.WriteLine("The user preference is changing. Category={0}", e.Category);
                Console.WriteLine("path={0}", System.Environment.GetEnvironmentVariable("PATH"));
            }
        }

    OnUserPreferenceChanging is equivalent to WM_SETTINGCHANGE. C# program running in Windows 7 (you can see the event come through and it picks up the correct path). C# program running in Windows 8 (you can see the event come through, but the wrong path). There is something about my environment that is precipitating this problem. However, is this a Windows 8 bug?

    Read the article

  • Linux Best Practices

    - by Zac
    I'm a life-long Windows developer switching over to Linux for the first time, and I'm starting off with Ubuntu to ease the learning curve. My new laptop will primarily be a development machine: 6GB RAM, 320 GB HD. I'd like there to be 2 non-root users: (a) Development, which will always be me, and (b) Guest, for anyone else. I assume the root user is added by default, like System Administrator in Windows. (1) I'd like to mount /home to its own partition, but how does this work if I have two user accounts (Development and Guest)? Are there 2 separate /home directories, or do they get shared? Is it possible to allocate more space for Development and only a tiny bit of space for Guest in GRUB2? How?!?! (2) I'm assuming that its okay that all of my development tools (Eclipse & plugins, SVN, JUnit, ant, etc.) and Java will end up getting installed in non-/home directories such as /usr and /opt, but that my Eclipse/SVN workspace will live under my /home directory on a separate partition... any problems, issues, concerns with that? (3) As far as partitioning schemes, nothing too complicated, but not plain Jane either: Boot Partition, 512 MB, in case I want to install other OSes Ubuntu & non-/home file system, 187.5 GB Swap Partition, 12 GB = RAM x 2 /home Partition, 120 GB I don't have any bulky media data (I don't have music or video libraries, this is a lean and mean dev machine) so having 320 GB is like winning the lottery and not knowing what to do with all this space. I figured I'd give a little extra space to the OS/FS partition since I'll be running JEE containers locally and doing a lot of file IO, logging and other memory-instensive operations. Any issues, problems, concerns, suggestions? (4) I was thinking about using ext4; seems to have good filestamping without any space ceiling for me to hit. Any other suggestions for a dev machine? (5) I read somewhere that you need to be careful when you install software as the root user, but I can't remember why. What general caveats do I need to be aware of when doing things (installing packages, making system configurations, etc.) as root vs "Development" user? Thanks!

    Read the article

  • Delphi TBytesField - How to see the text properly - Source is HIT OLEDB AS400

    - by myitanalyst
    We are connecting to a multi-member AS400 iSeries table via HIT OLEDB and HIT ODBC. You connect to this table via an alias to access a specific multi-member. We create the alias on the AS400 this way: CREATE ALIAS aliasname FOR table(membername) We can then query each member of the table this way: SELECT * FROM aliasname We are testing this in Delphi6 first, but will move it to D2010 later We are using HIT OLEDB for the AS400. We are pulling down records from a table and the field is being seen as a tBytesField. I have also tried ODBC driver and it sees as tBytesField as well. Directly on the AS400 I can query the data and see readable text. I can use the iSeries Navigation tool and see readable text as well. However when I bring it down to the Delphi client via the HIT OLEDB or HIT ODBC and try to view via asString then I just see unreadable text.. something like this: ñðð@ðõñððððñ÷@õôððõñòøóóöøñðÂÁÕÒ@ÖÆ@ÁÔÅÙÉÃÁ@@@@@@@@ÂÈÙÉâãæÁðòñè@ÔK@k@ÉÕÃK@@@@@@@@@ç I jumbled up the text above, but that is the character types that show up. When I did a test in D2010 the text looks like japanse or chinese characters, but if I display as AnsiString then it looks like what it does in Delphi 6. I am thinking this may have something to do with code pages or character sets, but I have no experience in this are so it is new to me if it is related. When I look at the Coded Character Set on the AS400 it is set to 65535. What do I need to do to make this text readable? We do have a third party component (Delphi400) that makes things behave in a more native AS400 manner. When I use its AS400 connection and AS400 query components it shows the field as a tStringField and displays just fine. BUT we are phasing out this product (for a number of reasons) and would really like the OLEDB with the ADO components work. Just for clarification the HIT OLEDB with tADOQuery do have some fields showing as tStringFields for many of the other tables we use... not sure why it is showing as a tBytesField in this case. I am not an AS400 expert, but looking at the field definititions on the AS400 the ones showing up as tBytesField look the same as the ones showing up as tStringFields... but there must be a difference. Maybe due to being a multi-member? So... does anyone have any guidance on how to get the correct string data that is readable? If you need more info please ask. Greg
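    A note that may help interpret the garbled text above: on the AS400, a Coded Character Set ID of 65535 generally marks data as binary/untranslated, so the driver hands the client raw EBCDIC bytes instead of converting them to ANSI, which is exactly what an unreadable tBytesField full of characters like the sample tends to look like. The snippet below is only an illustration of that point in Java (not Delphi): once the same bytes are decoded with an EBCDIC code page they turn back into ordinary text. Cp037 (US/Canada EBCDIC) is an assumption; the correct code page depends on the system.

        import java.nio.charset.Charset;

        // Minimal sketch: decode raw AS400 bytes as EBCDIC.
        // Cp037 is an assumption -- the right code page depends on the actual CCSID in use.
        public class EbcdicDecode {

            public static String toText(byte[] raw) {
                return new String(raw, Charset.forName("Cp037"));
            }

            public static void main(String[] args) {
                // "HELLO" in EBCDIC code page 037
                byte[] sample = { (byte) 0xC8, (byte) 0xC5, (byte) 0xD3, (byte) 0xD3, (byte) 0xD6 };
                System.out.println(toText(sample)); // prints HELLO
            }
        }

    The equivalent idea in Delphi would be translating the TBytesField contents through the appropriate EBCDIC-to-ANSI table, or configuring the driver/alias so the conversion happens on the way down, rather than reading the bytes as a string directly.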

    Read the article

  • CQRS - Should a Command try to create a "complex" master-detail entity?

    - by Simon Crabtree
    I've been reading Greg Young and Udi Dahan's thoughts on Command Query Responsibilty Separation and a lot of what I read strikes a chord with me. My domain (we track vehicles which are doing deliveries) has the concept of a Route which contains one or more Stops. I need my customers to be able to set these up in our system by calling a webservice, and then be able to retrieve information about a Route and how the vehicle is progressing. In the past I would have "cut-down" DTO classes which closely resemble my domain classes, and the customer would create a RouteDto with an array of StopDto(s), and call our CreateRoute webmethod, passing in the RouteDto. When they query our system by calling the GetRouteDetails method, I would return exactly the same objects to them. One of the appealing aspects of CQRS is that the RouteDto might have all manner of properties that the customer wants to query, but have no business setting when they create a Route. So I create a separate CreateRouteRequest class which is passed in when calling the CreateRoute "command", and a Route DTO class which gets returned as a query result. class Route{ string Reference; List<Stop> Stops; } But I need my customer to provide me with Route AND Stop details when they create a route. As I see it I could either... Give my CreateRouteRequest class a Stops(s) property which is an array of "something" representing the data they need to provide about each stop - but what do I call this class? It's not a Stop as that's what I'm calling the list of DTO inside my Route DTO, but I don't like "CreateStopRequest". I also wonder if I'm stuck in a CRUD mindset here thinking in terms of master-detail information and asking the customer to think like that too. class CreateRouteRequest{ string Reference; ... List<CreateStopRequest> Stops; } or They call CreateRoute, and then make a number of calls to an AddStopToRoute method. This feels a bit more "behavioural" but I'm going to lose the ability to treat creating a route including its stops as a single atomic command. If they create a Route and then try to add a Stop which fails due to some validation problem they're going to have a partially correct Route. The fact that I can't come up with a good name for the list of "StopCreationData" objects I'd be working with in option 1, makes me wonder if there's something I'm missing.
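    To make option 1 concrete, a rough sketch of the command might look like the following (Java here; CreateRouteCommand and StopDetails are placeholder names of mine, not taken from the post). The nested type carries only what a caller may legitimately supply at creation time, which is what distinguishes it from the Stop DTO returned by queries and keeps the whole thing one atomic command:

        import java.util.ArrayList;
        import java.util.List;

        // Sketch of option 1: one atomic command carrying the route and the data
        // needed for each stop. "StopDetails" is a placeholder name; the point is
        // that it describes input to the command, not the query-side Stop DTO.
        public class CreateRouteCommand {

            private final String reference;
            private final List<StopDetails> stops = new ArrayList<StopDetails>();

            public CreateRouteCommand(String reference, List<StopDetails> stops) {
                this.reference = reference;
                this.stops.addAll(stops);
            }

            public String getReference() { return reference; }
            public List<StopDetails> getStops() { return new ArrayList<StopDetails>(stops); }

            // Only what the caller may supply when creating a route -- no status,
            // no progress, none of the query-side properties of the Stop DTO.
            public static class StopDetails {
                private final String location;
                private final int sequence;

                public StopDetails(String location, int sequence) {
                    this.location = location;
                    this.sequence = sequence;
                }

                public String getLocation() { return location; }
                public int getSequence() { return sequence; }
            }
        }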

    Read the article

  • Productivity vs Security [closed]

    - by nerijus
    Really do not know is this right place to ask such a questions. But it is about programming in a different light. So, currently contracting with company witch pretends to be big corporation. Everyone is so important that all small issues like developers are ignored. Give you a sample: company VPN is configured so that if you have VPN then HTTP traffic is banned. Bearing this in mind can you imagine my workflow: Morning. Ok time to get latest source. Ups, no VPN. Let’s connect. Click-click. 3 sec. wait time. Ok getting source. Do I have emails? Ups. VPN is on, can’t check my emails. Need to wait for source to come up. Finally here it is! Ok Click-click VPN is gone. What is in my email. Someone reported a bug. Good, let’s track it down. It is in TFS already. Oh, dam, I need VPN. Click-click. Ok, there is description. Yea, I have seen this issue in stachoverflow.com. Let’s go there. Ups, no internet. Click-click. No internet. What? IPconfig… DHCP server kicked me out. Dam. Renew ip. 1..2..3. Ok internet is back. Google: site: stachoverflow.com 3 min. I have solution. Great I love stackoverflow.com. Don’t want to remember days where there was no stackoveflow.com. Ok. Copy paste this like to studio. Dam, studio is stalled, can’t reach files on TFS. Click-click. VPN is back. Get source out, paste my code. Grand. Let’s see what other comments about an issue in stackoverflow.com tells. Hmm.. There is a link. Click. Dammit! No internet. Click-click. No internet. DHCP kicked me out. Dammit. Now it is even worse: this happens 3-4 times a day. After certain amount of VPN connections open\closed my internet goes down solid. Only way to get internet back is reboot. All my browser tabs/SQL windows/studio will be gone. This happened just now when I am typing this. Back to issue I am solving right now: I am getting frustrated - I do not care about better solution for this issue. Let’s do it somehow and forget. This Click-click barrier between internet and TFS kills me… Sounds familiar? You could say there are VPN settings to change. No! This is company laptop, not allowed to do changes. I am very very lucky to have admin privileges on my machine. Most of developers don’t. So just learned to live with this frustration. It takes away 40-60 minutes daily. Tried to email company support, admins. They are too important ant too busy with something that just ignored my little man’s problem. Politely ignored. Question is: Is this normal in corporate world? (Have been in States, Canada, Germany. Never seen this.)

    Read the article

  • Company Review: Google Products

    Google, Inc offers an array of products and services to all of its end-users. However their search capabilities are the foundation for Google’s current success and their primary business focus. Currently, Google offers over twenty different search applications that allow users to search the internet for books, maps, videos, images, products and much more. Their product decisions have allowed users demands to be met while focusing on the free based model. This allows users to access Google data free of charge and indirectly gives Google a strong competitive advantage of other competitors along with the accuracy of the search results. According to Google, Inc, they offer the following types of searching capabilities: Alerts Get email updates on the topics of your choice Blog Search Find blogs on your favorite topics  Books Search the full text of books  Custom Search Create a customized search experience for your community  Desktop Search and personalize your computer  Dictionary Search for definitions of words and phrases Directory Search the web, organized by topic or category Earth Explore the world from your computer Finance Business info, news and interactive charts GOOG-411 Find and connect for free with businesses from your phone  Images Search for images on the web Maps View maps and directions News Search thousands of news stories Patent Search Search the full text of US Patents Product Search Search for stuff to buy Scholar Search scholarly papers Toolbar Add a search box to your browser Trends Explore past and present search trends Videos Search for videos on the web Web Search Search billions of web pages Web Search Features Find movies, music, stocks, books and more mapping Google’s free based business model is only one way it differentiates itself from its competition. There is also a strong focus on the accuracy of search results and the speed in which they are returned to the end-user. Quality function deployment (QFD) is a structured method used to help connect user needs to the design features of a project proposed to address those needs. This method is particularly useful in accounting for needs that are not easily articulated or precisely defined according to the U. S. Department of Transportation Federal Highway Administration. Due to the fact that QFD is so customer driven Google is always in a constant state of change in attempt to reengineer its search algorithms, and other dependant systems so that end-users requirements are constantly being met. Value engineering is a key example of this, Google is constantly trying to improve all aspects of its products, improve system maintainability, and system interoperability. Bridgefield Group defines value engineering as an organized methodology that identifies and selects the lowest lifecycle cost options in design, materials and processes that achieves the desired level of performance, reliability and customer satisfaction. In addition, it seeks to remove unnecessary costs in the above areas and is often a joint effort with cross-functional internal teams and relevant suppliers. Common issues that appear when developing large scale systems like Google’s search applications include modular design of a product and/or service and providing accurate value analysis. 
A design approach that adheres to four fundamental tenets of cohesiveness, encapsulation, self-containment, and high binding to design a system component as an independently operable unit subject to change is how the Open System Joint Task Force defines modular design. More specifically M. S. Schmaltz defines modular software design as having a large collection of statements strung together in one partition of in-line code; we segment or divide the statements into logical groups called modules. Each module performs one or two tasks, and then passes control to another module. By breaking up the code into "bite-sized chunks", so to speak, we are able to better control the flow of data and control. This is especially true in large software systems. Value analysis is a process to evaluate products and services based on effectiveness, safety, and cost. Value analysis involves assessing the quality as well as the cost of a product or service as defined by the Healthcare Financial Management Association.  “Operations Management deals with the design and management of products, processes, services and supply chains. It considers the acquisition, development, and utilization of resources that firms need to deliver the goods and services their clients want.” (MIT,2010) Google, Inc encourages an open environment between all employees, also known as Googlers. This is reinforced by a cross-section team or cross-functional teams comprised from multiple departments assigned to every project so that every department like marketing, finance, and quality assurance has input on every project. In addition, Google is known for their openness to new ideas regardless of the status or seniority of an employee. In fact, Google allows for 20% of an employee’s time can be devoted to developing new ideas and/or pet projects. HumTech.com defines a cross-functional team as a collection of people with varied levels of skills and experience brought together to accomplish a task. As the name implies, Cross-Functional Team members come from different organizational units. Cross-Functional Teams may be permanent or ad hoc. Google’s search application product strategy primarily focuses on mass customization. This is allows Google to create a base search application and allows results to be returned to the end-users quickly based on specific parameters and search settings. In addition, they also store the data that is returned in case other desire the same results based on other end-users supplying the same customized settings. This allows Google to appear to render search results in virtually real-time to the user while allowing for complete customization of the searching criteria. Greg Vogl, a professor at Uganda Martyrs University, defines mass customization as when a business gives its customers the opportunity to tailor its products or services to the customer's specifications. The IT staff at Google play a key role in ensuring that the search application’s product strategy is maintained simply because the IT staff designs, develops, and maintains all of their proprietary applications. In fact, they also maintain all network infrastructure to ensure that it is available to all end-users. 
    References:
    http://www.google.com/intl/en/options/
    http://ops.fhwa.dot.gov/freight/publications/ftat_user_guide/sec5.htm
    http://www.bridgefieldgroup.com/bridgefieldgroup/glos9.htm#V
    http://www.acq.osd.mil/osjtf/termsdef.html
    http://www.cise.ufl.edu/~mssz/Pascal-CGS2462/prog-dsn.html
    http://www.hfma.org/publications/business_caring_newsletter/exclusives/Supply+and+Inventory+Terms+Defined.htm
    http://mitsloan.mit.edu/omg/om-definition.php
    http://www.humtech.com/opm/grtl/ols/ols3.cfm
    http://www.gregvogl.net/courses/mis1/glossary.htm

    Read the article

  • SQL Sentry First Impressions

    - by AjarnMark
    After struggling to defend my SQL Servers from a political attack recently, I realized that I needed better tools to back me up, and SQL Sentry is the leading candidate. A couple of weeks ago, seemingly from out of nowhere, complaints from the business users started coming in that one of the core internal applications was running dramatically slower than normal, and fingers were being pointed at the SQL Server.  Unfortunately, we don’t have a production DBA whose entire job is to monitor and maintain our SQL Servers.  The responsibility falls to me to do the best I can, investing only a small portion of my time, because there are so many other responsibilities to take care of, and our industry is still deep in recession.  I inherited these SQL Servers and have made significant improvements in process and procedure, but I had not yet made the time to take real baseline measurements or keep a really close eye on the performance.  Like many DBAs, I wrote several of my own tools and used the “built-in tools” like Profiler, PerfMon, and sp_who2 (did I mention most of our instances are SQL Server 2000?).  These have all served me well for in-the-moment troubleshooting and maintenance, but they really fell down on the job when I was called upon to “prove” that SQL Server performance was acceptable and more importantly had not degraded recently (i.e. historical comparisons).  I really didn’t have anything from a historical comparison perspective, but I was able to show that current performance was acceptable, and deflect attention back onto other components (which in fact turned out to be the real culprit). That experience dramatically illustrated the need for better monitoring tools.  Coincidentally, I had been talking recently to my boss about the mini nightmare of monitoring several critical and interdependent overnight jobs that operate on separate instances of SQL Server.  Among other tools, I had been using Idera’s SQL Job Manager which is a free tool and did a nice job of showing me job schedules and histories in a nice calendar view.  This worked fairly well, and for the money (did I mention it was free?) it couldn’t be beat.  But it is based on the stored job history in MSDB, and there were other performance problems that we ran into when we started changing the settings for how much job history to retain, in order to be able to look back a month or more in the calendar view.  Another coincidence (if you believe in such things) was that when we had some of those performance challenges, I posted a couple of questions to the #sqlhelp hashtag on Twitter and Greg Gonzalez (@SQLSensei) suggested I check out SQL Sentry’s Event Manager.  At the time, I just thought he worked there, but later found out that he founded the company.  When I took a quick look at the features & benefits, the one that really jumped out at me is Chaining and Queueing which sounded like it would really help with our “interdependent jobs on different servers” issue. I know that is a lot of background story and coincidences, but hopefully you have stuck with me so far, and now we have arrived at the point where last week I downloaded and installed the 30-day trial of the SQL Sentry Power Suite, which is Event Manager plus Performance Advisor.  And I must say that I really like what I see so far.  Here are a few highlights: Great Support.  I had two issues getting the trial setup and monitoring a handful of our servers.  
One of which was entirely my fault (missed a security setting in SQL 2008) and the other was mostly my fault (late change to some config settings that were apparently cached and did not get refreshed properly).  In both cases, the support staff at SQL Sentry were very responsive and rather quickly figured out what the cause and fix was for each of them.  This left me with a great impression of the company.  Kudos to them! Chaining and Queueing.  While I have not yet activated this feature, I am very excited about the possibilities.  We have jobs on three different instances of SQL Server that have to be run in a certain order, and each has to finish before the next can successfully begin, and I believe this feature will ensure just that.  It has been a real pain in the backside when one of those jobs runs just a little too long and does not finish before the job on another instance starts, thus triggering a chain reaction of either outright job failures, or worse, successful completion of completely invalid processing. Calendar View.  I really, really like the Event Manager calendar view where I can see all jobs and events across all instances and identify potential resource contention as well as windows of opportunity for maintenance activity.  Very well done, and based on Event Manager’s own database of accumulated historical information rather than querying the source instances every time. Performance Advisor Dashboard History View.  This view let’s me quickly select a date and time range and it displays graphs of key SQL Server and Windows metrics.  This is exactly the thing I needed to answer the “has performance changed recently” question at the beginning of this post. Reporting Services Subscription Jobs with Report Name.  This was a big and VERY pleasant surprise.  If you have ever looked at the list of SQL Server jobs that SQL Server Reporting Services creates when you make a Subscription, you will notice that they all have some sort of GUID as the name of the job.  This is really ugly, and really annoying because when you are just looking at the SQL Agent and Job Activity Monitor, if you see that Job X failed, you really do not have any indication in the name or the properties of the Job itself, as to what Report that was for.  But with SQL Sentry Event Manager you do.  The Jobs list in the Navigator pane in SQL Sentry, amazingly, displays the name of the Report that the Subscription Job is for.  And when you open it to see more details, it shows you the full Reporting Services path to that Report, so you can immediately track it down in the Report Manager in case you want to identify/notify the owner or edit the Subscription information.  I did not expect this at all, but I sure do like it.  HOORAY! That is just my first impressions from using the tools for a few days.  And I haven’t even gotten into how it showed me where I was completely mistaken about one aspect of my SQL Server disk configurations.  I’ll share that lesson in another blog entry.  But I have to say it again, the combination of Event Manager and Performance Advisor working together have really made me a fan.

    Read the article

  • java 7 upgrade and hibernate annotation processor error

    - by Bill Turner
    I am getting the following warning, which seems to be triggering a subsequent warning and an error. I have been googling like mad, though have not found anything that makes it clear what it is I should do to resolve this. This issue occurs when I execute an Ant build. I am trying to migrate our project to Java 7. I have changed all the source='1.6' and target="1.6" to 1.7. I did find this related article: Forward compatible Java 6 annotation processor and SupportedSourceVersion It seems to indicate that I should build the Hibernate annotation processor jar myself, compiling it with with 1.7. It does not seem I should be required to do so. The latest version of the class in question (in hibernate-validator-annotation-processor-5.0.1.Final.jar) has been compiled with 1.6. Since the code in said class refers to SourceVersion.latestSupported(), and the 1.6 of that returns only RELEASE_6, there does not seem to be a generally available solution. Here is the warning: [javac] warning: Supported source version 'RELEASE_6' from annotation processor 'org.hibernate.validator.ap.ConstraintValidationProcessor' less than -source '1.7' And, here are the subsequent warnings/error. [javac] warning: No processor claimed any of these annotations: javax.persistence.PersistenceContext,javax.persistence.Column,org.codehaus.jackson.annotate.JsonIgnore,javax.persistence.Id,org.springframework.context.annotation.DependsOn,com.trgr.cobalt.infrastructure.datasource.Bucketed,org.codehaus.jackson.map.annotate.JsonDeserialize,javax.persistence.DiscriminatorColumn,com.trgr.cobalt.dataroom.authorization.secure.Secured,org.hibernate.annotations.GenericGenerator,javax.annotation.Resource,com.trgr.cobalt.infrastructure.spring.domain.DomainField,org.codehaus.jackson.annotate.JsonAutoDetect,javax.persistence.DiscriminatorValue,com.trgr.cobalt.dataroom.datasource.config.core.CoreTransactionMandatory,org.springframework.stereotype.Repository,javax.persistence.GeneratedValue,com.trgr.cobalt.dataroom.datasource.config.core.CoreTransactional,org.hibernate.annotations.Cascade,javax.persistence.Table,javax.persistence.Enumerated,org.hibernate.annotations.FilterDef,javax.persistence.OneToOne,com.trgr.cobalt.dataroom.datasource.config.core.CoreEntity,org.springframework.transaction.annotation.Transactional,com.trgr.cobalt.infrastructure.util.enums.EnumConversion,org.springframework.context.annotation.Configuration,com.trgr.cobalt.infrastructure.spring.domain.UpdatedFields,com.trgr.cobalt.infrastructure.spring.documentation.SampleValue,org.springframework.context.annotation.Bean,org.codehaus.jackson.annotate.JsonProperty,javax.persistence.Basic,org.codehaus.jackson.map.annotate.JsonSerialize,com.trgr.cobalt.infrastructure.spring.validation.Required,com.trgr.cobalt.dataroom.datasource.config.core.CoreTransactionNever,org.springframework.context.annotation.Profile,com.trgr.cobalt.infrastructure.spring.stereotype.Persistor,javax.persistence.Transient,com.trgr.cobalt.infrastructure.spring.validation.NotNull,javax.validation.constraints.Size,javax.persistence.Entity,javax.persistence.PrimaryKeyJoinColumn,org.hibernate.annotations.BatchSize,org.springframework.stereotype.Service,org.springframework.beans.factory.annotation.Value,javax.persistence.Inheritance [javac] error: warnings found and -Werror specified TIA!
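    For context, the warning itself comes from the Hibernate processor declaring support only up to RELEASE_6, and the -Werror flag visible in the output is what promotes that warning to a build failure. The pattern the linked question describes for a forward-compatible processor looks roughly like the sketch below (an illustration only, not the actual Hibernate Validator source); a processor that reports SourceVersion.latestSupported() no longer triggers the mismatch warning under -source 1.7.

        import java.util.Set;
        import javax.annotation.processing.AbstractProcessor;
        import javax.annotation.processing.RoundEnvironment;
        import javax.annotation.processing.SupportedAnnotationTypes;
        import javax.lang.model.SourceVersion;
        import javax.lang.model.element.TypeElement;

        // Sketch of a forward-compatible annotation processor: instead of a hard-coded
        // @SupportedSourceVersion(SourceVersion.RELEASE_6), it reports whatever source
        // version the running compiler supports.
        @SupportedAnnotationTypes("*")
        public class ForwardCompatibleProcessor extends AbstractProcessor {

            @Override
            public SourceVersion getSupportedSourceVersion() {
                return SourceVersion.latestSupported();
            }

            @Override
            public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
                // real validation work would go here
                return false;
            }
        }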

    Read the article

  • Error running java -jar command

    - by dmantamp
    Hi all, I created a jar file using the following ANT script:

        <manifestclasspath property="jar.classpath" jarfile="${bin.dir}/${jar.app.name}" maxparentlevels="0">
            <classpath refid="main.class.path" />
        </manifestclasspath>

        <target name="jar">
            <mkdir dir="${build.dir}/lib/isp"/>
            <mkdir dir="${build.dir}/lib/jasper"/>
            <copy todir="${build.dir}/lib/jasper">
                <fileset dir="${lib.jasper.dir}">
                    <include name="**/*.jar" />
                </fileset>
            </copy>
            <copy todir="${build.dir}/lib/isp">
                <fileset dir="${lib.isp.dir}">
                    <include name="**/*.jar" />
                </fileset>
            </copy>
            <jar jarfile="${bin.dir}/${jar.app.name}" index="true" basedir="${classes.dir}" excludes="lib/mytest.jar ">
                <manifest>
                    <attribute name="Main-Class" value="${main.class}" />
                    <attribute name="Class-Path" value="${jar.classpath}" />
                </manifest>
            </jar>
        </target>

    The resulting jar file has the following MANIFEST.MF entry:

        Main-Class: dm.jb.Main
        Class-Path: lib/isp/OfficeLnFs_2.2.jar lib/isp/RXTXcomm.jar lib/isp/barbecue-1.0.6d.jar lib/isp/commons-logging-1.1.jar lib/isp/forms-1.0.5.jar lib/isp/gnujaxp.jar lib/isp/helpUI.jar lib/isp/inspInstaller.jar lib/isp/itext-2.0.1.jar lib/isp/itext-2.0.2.jar lib/isp/jcalendar-1.3.2.jar lib/isp/jcl.jar lib/isp/jcommon-1.0.10.jar lib/isp/jcommon-1.0.9.jar lib/isp/jdnc-0_7-all.jar lib/isp/jdnc-runner.jar lib/isp/jdom.jar lib/isp/jfreechart-1.0.6.jar lib/isp/jlfgr-1_0.jar lib/isp/junit.jar lib/isp/log4j-1.2.9.jar lib/isp/looks-1.3.2.jar lib/isp/msbase.jar lib/isp/mssqlserver.jar lib/isp/msutil.jar lib/isp/mysql-connector

    When I try to run the command java -jar mytest.jar, it fails and throws an error saying dm.jb.Main not found. But I could run the class by specifying the classpath: java -classpath dm.jb.Main Please help me. DM
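    As a quick sanity check (my own sketch, not from the question), you can confirm what Main-Class and Class-Path actually ended up inside the built jar. One thing worth keeping in mind: relative Class-Path entries in a manifest are resolved against the directory containing the jar, not the current working directory, so the lib/isp and lib/jasper folders need to sit next to mytest.jar when it is launched with java -jar.

        import java.io.IOException;
        import java.util.jar.Attributes;
        import java.util.jar.JarFile;
        import java.util.jar.Manifest;

        // Prints the Main-Class and Class-Path recorded in a jar's manifest.
        public class ManifestCheck {
            public static void main(String[] args) throws IOException {
                String jarPath = args.length > 0 ? args[0] : "mytest.jar";
                JarFile jar = new JarFile(jarPath);
                try {
                    Manifest mf = jar.getManifest();
                    if (mf == null) {
                        System.out.println("No manifest found in " + jarPath);
                        return;
                    }
                    Attributes attrs = mf.getMainAttributes();
                    System.out.println("Main-Class: " + attrs.getValue("Main-Class"));
                    System.out.println("Class-Path: " + attrs.getValue("Class-Path"));
                } finally {
                    jar.close();
                }
            }
        }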

    Read the article

  • mySQL :syntax using in C#

    - by Meko
    Hi. I am using in C# MYsql .I have query that works if I run on MySql Workbench ,but in C# it does not return any value also does not give ant error too.There is only one different using on Mysql I use before table name databaseName.tableName , but in C# I think it doesn`t necessary. This is part of query which does not return anything. "(select Lesson_Name from schedule where Group_NO = (select Group_NO from sinif inner join student ON sinif.Group_ID=student.Group_ID where Student_Name=(?Student))"+ " And Day_Name =(select Day_Name from day inner join date ON day.Day_ID=date.DayName where Date=(?Date))" + "And Lesson_Time= (select Lesson_Time from clock where Lesson_Time <= (?Time)order by Lesson_Time DESC limit 0, 1) " + " And Week_NO = (select Week_NO from week inner join date ON week.Week_ID=date.Week_ID where Date=(?Date))) And Here all codes which executes when user click button. private void check_B_Click(object sender, EventArgs e) { connection.Open(); for (int i = 0; i < existingStudents.Count; i++) { MySqlCommand cmd1 = new MySqlCommand("select Student_Name,Student_Surname,Student_MacAddress from student ", connection); MySqlCommand cmd2 = new MySqlCommand("insert into check_list (Student,Mac_Address,Date,Time,Lesson_Name)"+ "values((?Student),(?MacAddress),(?Date),(?Time),"+ "(select Lesson_Name from schedule where Group_NO = (select Group_NO from sinif inner join student ON sinif.Group_ID=student.Group_ID where Student_Name=(?Student))"+ " And Day_Name =(select Day_Name from day inner join date ON day.Day_ID=date.DayName where Date=(?Date))" + "And Lesson_Time= (select Lesson_Time from clock where Lesson_Time <= (?Time)order by Lesson_Time DESC limit 0, 1) " + " And Week_NO = (select Week_NO from week inner join date ON week.Week_ID=date.Week_ID where Date=(?Date))))", connection); MySqlParameter param1 = new MySqlParameter(); param1.ParameterName = "?Student"; reader = cmd1.ExecuteReader(); if (reader.HasRows) while (reader.Read()) { if (reader["Student_MacAddress"].ToString() == existingStudentsMac[i].ToString()) param1.Value = reader["Student_Name" ]+" "+reader["Student_Surname"]; } reader.Close(); MySqlParameter param2 = new MySqlParameter(); param2.ParameterName = "?MacAddress"; param2.Value = existingStudentsMac[i]; MySqlParameter param3 = new MySqlParameter(); param3.ParameterName = "?Date"; param3.Value = DateTime.Today.Date; MySqlParameter param4 = new MySqlParameter(); param4.ParameterName = "?Time"; param4.Value = DateTime.Now.ToString("HH:mm"); cmd2.Parameters.Add(param1); cmd2.Parameters.Add(param2); cmd2.Parameters.Add(param3); cmd2.Parameters.Add(param4); cmd2.ExecuteNonQuery(); } connection.Close(); MessageBox.Show("Sucsess :)"); }

    Read the article

  • Proxy settings with ivy...

    - by user315228
    Hi, I have an issue wherein I have defined dependencies in ivy.xml on our internal corporate svn. I am able to access this svn site without any proxy task in Ant. My dependencies, however, reside on ibiblio, which is outside our corporate network and needs a proxy in order to download anything. I am facing a problem using Ivy here. I have the following in build.xml:

        <target name="proxy">
            <property name="proxy.host" value="xyz.proxy.net"/>
            <property name="proxy.port" value="8443"/>
            <setproxy proxyhost="${proxy.host}" proxyport="${proxy.port}"/>
        </target>

        <!-- resolve the dependencies of stratus -->
        <target name="resolveTestDependency" depends="testResolve, proxy" description="retrieve test dependencies with ivy">
            <ivy:settings file="stratus-ivysettings.xml" />
            <ivy:retrieve conf="test" pattern="${jars}/[artifact]-[revision].[ext]"/> <!-- pattern here specifies where you want to download the libs to -->
        </target>

        <target name=" testResolve ">
            <ivy:settings file="stratus-ivysettings.xml" />
            <ivy:resolve conf="test" file="stratus-ivy.xml"/>
        </target>

    Following is the excerpt from stratus-ivysettings.xml:

        <resolvers>
            <!-- here you define your file in private machine not on the repo (e.g. jPricer.jar or edgApi.jar) -->
            <!-- This we will use a url and not local file system.. -->
            <url name="privateFS">
                <ivy pattern="http://xyz.svn.com/ivyRepository/[organisation]/ivy/ivy.xml"/>
            </url>
            . . .
            <url name="public" m2compatible="true">
                <artifact pattern="http://www.ibiblio.org/maven2/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]"/>
            </url>
            . . .

    So as can be seen here, for getting ivy.xml I don't need any proxy, as it is within our own network - which can't be accessed when I set the proxy. But on the other hand I am using ibiblio as well, which is external to our network and works only with the proxy. So the above build.xml won't work in that case. Can somebody help here? I don't need the proxy while getting ivy.xml (if I have the proxy, Ivy won't be able to find the ivy file behind the proxy from within the network), and I just need it when my resolver goes to the public url.

    Read the article

  • Maven best practice for generating multiple jars with different/filtered classes ?

    - by jaguard
    I developed a Java utility library (similarly to Apache Commons) that I use in various projects. Additionally to fat clients I also use it for mobile clients (PDA with J9 Foundation profile). In time the library that started as a single project spread over multiple packages. As a result I end up with a lot of functionality but not really needed in all the projects. Since this library is also used inside some mobile/PDA projects I need a way to collect just the used classes and generate the actual specialized jars Currently in the projects that area using this library, I have Ant jar tasks that generate (from the utility project) the specialized jar files (ex: my-util-1.0-pda.jar, my-util-1.0-rcp.jar) using include/exclude jar task features. This is mostly needed due to the generated jar size constraints for the mobile projects. Migrating now to Maven I just wonder if there are any best practices to arrive to something similar so I consider the following scenarios: [1] - additionally to the main jar artifact (my-lib-1.0.jar) also generating inside my-lib project the separate/specialized artifacts using classifiers (ex: my-lib-1.0-pda.jar) using Maven Jar Plugin or Maven Assembly Plugin filtering/includes ... I'm not very comfortable with this approach since it pollute the library with library consumers demands (filters) [2] - Create additional Maven projects for all the specialized clients/projects, that will "wrap" the "my-lib" and generate the filtered jar artifacts (ex: my-lib-wrapper-pda-1.0 ...etc). As a result, these wrapper projects will include the filtering (to generate the filtered artifact) and will depend just on the "my-lib" project and the client projects will depend on my-lib-wrapper-xxx-1.0 instead of my-lib-1.0. This approach my look problematic since even that will let "my-lib" project intact (with no additional classifiers & artifacts), basically will double the number of projects since for every client project I'll have one just to collect the needed classes from the "my-util" library ("my-pda-app" project will need a "my-lib-wrapper-for-my-pda-app" project/dependency) [3] - Into the every client project that use the library (ex: my-pda-app) add some specialized Maven plugins to trim - out (when generating the final artifact/package) the un-needed classes (ex: maven-assembly-plugin, maven-jar-plugin, proguard-maven-plugin) What is the best practice for solving this kind of problems in the "Maven way" ?!

    Read the article

  • java.lang.ClassNotFoundException: org.apache.struts2.dispatcher.FilterDispatcher

    - by user714698
    Log when starting Tomcat:
      Apr 28, 2011 10:52:57 AM org.apache.catalina.core.AprLifecycleListener init
      INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: D:\software\jdk1.5.0_06\bin;.;C:\WINDOWS\system32;C:\WINDOWS;D:/software/jdk1.5.0_06/bin/../jre/bin/client;D:/software/jdk1.5.0_06/bin/../jre/bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\Program Files\TortoiseSVN\bin.;D:\software\jdk1.5.0_06\bin;D:\software\Ant 1.7\bin;D:\software\Axis2-1.5.4\axis2-1.5.4-bin\axis2-1.5.4\bin;C:\Program Files\IDM Computer Solutions\UltraEdit\
      Apr 28, 2011 10:52:58 AM org.apache.tomcat.util.digester.SetPropertiesRule begin
      WARNING: [SetPropertiesRule]{Server/Service/Engine/Host/Context} Setting property 'source' to 'org.eclipse.jst.jee.server:StrutsHelloWorld' did not find a matching property.
      Apr 28, 2011 10:52:58 AM org.apache.tomcat.util.digester.SetPropertiesRule begin
      WARNING: [SetPropertiesRule]{Server/Service/Engine/Host/Context} Setting property 'source' to 'org.eclipse.jst.jee.server:StrutsHelloWorld1' did not find a matching property.
      Apr 28, 2011 10:53:00 AM org.apache.coyote.http11.Http11Protocol init
      INFO: Initializing Coyote HTTP/1.1 on http-8080
      Apr 28, 2011 10:53:00 AM org.apache.catalina.startup.Catalina load
      INFO: Initialization processed in 3657 ms
      Apr 28, 2011 10:53:00 AM org.apache.catalina.core.StandardService start
      INFO: Starting service Catalina
      Apr 28, 2011 10:53:00 AM org.apache.catalina.core.StandardEngine start
      INFO: Starting Servlet Engine: Apache Tomcat/6.0.32
      Apr 28, 2011 10:53:01 AM org.apache.catalina.core.StandardContext filterStart
      SEVERE: Exception starting filter struts2
      java.lang.ClassNotFoundException: org.apache.struts2.dispatcher.FilterDispatcher
          at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1680)
          at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1526)
          at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:269)
          at org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:422)
          at org.apache.catalina.core.ApplicationFilterConfig.<init>(ApplicationFilterConfig.java:115)
          at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4071)
          at org.apache.catalina.core.StandardContext.start(StandardContext.java:4725)
          at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053)
          at org.apache.catalina.core.StandardHost.start(StandardHost.java:840)
          at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053)
          at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463)
          at org.apache.catalina.core.StandardService.start(StandardService.java:525)
          at org.apache.catalina.core.StandardServer.start(StandardServer.java:754)
          at org.apache.catalina.startup.Catalina.start(Catalina.java:595)
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
          at java.lang.reflect.Method.invoke(Method.java:585)
          at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
          at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
      Apr 28, 2011 10:53:01 AM org.apache.catalina.core.StandardContext start
      SEVERE: Error filterStart
      Apr 28, 2011 10:53:01 AM org.apache.catalina.core.StandardContext start
      SEVERE: Context [/StrutsHelloWorld1] startup failed due to previous errors
      log4j:WARN No appenders could be found for logger (com.opensymphony.xwork2.config.providers.XmlConfigurationProvider).
      log4j:WARN Please initialize the log4j system properly.
      Apr 28, 2011 10:53:05 AM org.apache.coyote.http11.Http11Protocol start
      INFO: Starting Coyote HTTP/1.1 on http-8080
      Apr 28, 2011 10:53:05 AM org.apache.jk.common.ChannelSocket init
      INFO: JK: ajp13 listening on /0.0.0.0:8009
      Apr 28, 2011 10:53:05 AM org.apache.jk.server.JkMain start
      INFO: Jk running ID=0 time=0/46 config=null
      Apr 28, 2011 10:53:05 AM org.apache.catalina.startup.Catalina start
      INFO: Server startup in 4951 ms
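    For context, the filter named struts2 that fails to start is typically declared in the webapp's WEB-INF/web.xml roughly as in the sketch below (the classic Struts 2.0/2.1 setup; later releases replaced FilterDispatcher with StrutsPrepareAndExecuteFilter). The ClassNotFoundException means the jar providing this class, struts2-core, is not on the webapp classpath, i.e. not in WEB-INF/lib:

      <filter>
          <filter-name>struts2</filter-name>
          <!-- provided by struts2-core-2.x.jar, which must be in WEB-INF/lib -->
          <filter-class>org.apache.struts2.dispatcher.FilterDispatcher</filter-class>
      </filter>
      <filter-mapping>
          <filter-name>struts2</filter-name>
          <url-pattern>/*</url-pattern>
      </filter-mapping>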

    Read the article

  • PL/SQL pre-compile and Code Quality checks in an automated build environment?

    - by Lars Corneliussen
    We build software using Hudson and Maven. We have C#, Java and, last but not least, PL/SQL sources (sprocs, packages, DDL, crud). For C# and Java we do unit tests and code analysis, but we don't really know the health of our PL/SQL sources before we actually publish them to the target database. Requirements There are a couple of things we want to test, in the following priority: Are the sources valid, hence "compilable"? For packages, with respect to a certain database, would they compile? Code Quality: Do we have code flaws like duplicates, too complex methods or other violations of a defined set of rules? Also, the tool must run head-less (command line, ant, ...) and we want to do analysis on a partial code base (changed sources only). Tools We did a little research and found the following tools that could potentially help: Cast Application Intelligence Platform (AIP): Seems to be a server that grasps information about "anything". Couldn't find a console version that would export in readable format. Toad for Oracle: The Professional version is said to include something called Xpert that validates a set of rules against a code base. Sonar + PL/SQL-Plugin: Uses Toad for Oracle to display code health the Sonar way. This is for browsing the current state of the code base. Semantic Designs DMSToolkit: Quite general analysis of the source code base. Command line available? Semantic Designs Clones Detector: Detects clones. But also via command line? Fortify Source Code Analyzer: Seems to be focused on security issues. But maybe it is extensible? more... So far, Toad for Oracle together with Sonar seems to be an elegant solution. But maybe we are missing something here? Any ideas? Other products? Experiences? Related Questions on SO: http://stackoverflow.com/questions/531430/any-static-code-analysis-tools-for-stored-procedures http://stackoverflow.com/questions/839707/any-code-quality-tool-for-pl-sql http://stackoverflow.com/questions/956104/is-there-a-static-analysis-tool-for-python-ruby-sql-cobol-perl-and-pl-sql
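    For the first requirement (are the sources valid against a given database?), one head-less option is to deploy to a scratch schema and then query the Oracle data dictionary for invalid objects from the build. A minimal Ant sketch using the built-in <sql> task; the connection properties and driver classpath are assumptions:

      <target name="check-plsql-validity">
          <!-- assumes an Oracle JDBC driver jar on the path referenced by ${db.classpath} -->
          <sql driver="oracle.jdbc.OracleDriver"
               url="${db.url}"
               userid="${db.user}"
               password="${db.password}"
               classpath="${db.classpath}"
               print="true">
              SELECT object_name, object_type, status
                FROM user_objects
               WHERE status = 'INVALID';
          </sql>
      </target>

    This only tells you whether the objects compiled; failing the build on a non-empty result, and the code-quality rules themselves, would still need one of the tools listed above.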

    Read the article

  • What facets have I missed for creating a 3 person guerilla dev team?

    - by Penguinix
    Sorry for the Windows developers out there, this solution is for Macs only. This set of applications accounts for: Usability Testing, Screen Capture (Video and Still), Version Control, Task Lists, Bug Tracking, a Developer IDE, a Web Server, A Blog, Shared Doc Editing on the Web, Team and individual Chat, Email, Databases and Continuous Integration. This does assume your team members provide their own machines, and one person has a spare old computer to be the Source Repository and Web Server. All for under $200 bucks. Usability Silverback Licenses = 3 x $49.95 "Spontaneous, unobtrusive usability testing software for designers and developers." Source Control Server and Clients (multiple options) Subversion = Free Subversion is an open source version control system. Versions (Currently in Beta) = Free Versions provides a pleasant work with Subversion on your Mac. Diffly = Free "Diffly is a tool for exploring Subversion working copies. It shows all files with changes and, clicking on a file, shows a highlighted view of the changes for that file. When you are ready to commit Diffly makes it easy to select the files you want to check-in and assemble a useful commit message." Bug/Feature/Defect Tracking (multiple options) Bugzilla = Free Bugzilla is a "Defect Tracking System" or "Bug-Tracking System". Defect Tracking Systems allow individual or groups of developers to keep track of outstanding bugs in their product effectively. Most commercial defect-tracking software vendors charge enormous licensing fees. Trac = Free Trac is an enhanced wiki and issue tracking system for software development projects. Database Server & Clients MySQL = Free CocoaMySQL = Free Web Server Apache = Free Development and Build Tools XCode = Free CruiseControl = Free CruiseControl is a framework for a continuous build process. It includes, but is not limited to, plugins for email notification, Ant, and various source control tools. A web interface is provided to view the details of the current and previous builds. Collaboration Tools Writeboard = Free Ta-da List = Free Campfire Chat for 4 users = Free WordPress = Free "WordPress is a state-of-the-art publishing platform with a focus on aesthetics, web standards, and usability. WordPress is both free and priceless at the same time." Gmail = Free "Gmail is a new kind of webmail, built on the idea that email can be more intuitive, efficient, and useful." Screen Capture (Video / Still) Jing = Free "The concept of Jing is the always-ready program that instantly captures and shares images and video…from your computer to anywhere." Lots of great responses: TeamCity [Yo|||] Skype [Eric DeLabar] FogBugz [chakrit] IChatAV and Screen Sharing (built-in to OS) [amrox] Google Docs [amrox]

    Read the article

  • What's the best Scala build system?

    - by gatoatigrado
    I've seen questions about IDE's here -- Which is the best IDE for Scala development? and What is the current state of tooling for Scala?, but I've had mixed experiences with IDEs. Right now, I'm using the Eclipse IDE with the automatic workspace refresh option, and KDE 4's Kate as my text editor. Here are some of the problems I'd like to solve: use my own editor IDEs are really geared at everyone using their components. I like Kate better, but the refresh system is very annoying (it doesn't use inotify, rather, maybe a 10s polling interval). The reason I don't use the built-in text editor is because broken auto-complete functionalities cause the IDE to hang for maybe 10s. rebuild only modified files The Eclipse build system is broken. It doesn't know when to rebuild classes. I find myself almost half of the time going to project-clean. Worse, it seems even after it has finished building my project, a few minutes later it will pop up with some bizarre error (edit - these errors appear to be things that were previously solved with a project clean, but then come back up...). Finally, setting "Preferences / Continue launch if project contains errors" to "prompt" seems to have no effect for Scala projects (i.e. it always launches even if there are errors). build customization I can use the "nightly" release, but I'll want to modify and use my own Scala builds, not the compiler that's built into the IDE's plugin. It would also be nice to pass [e.g.] -Xprint:jvm to the compiler (to print out lowered code). fast compiling Though Eclipse doesn't always build right, it does seem snappy -- even more so than fsc. I looked at Ant and Maven, though haven't employed either yet (I'll also need to spend time solving #3 and #4). I wanted to see if anyone has other suggestions before I spend time getting a suboptimal build system working. Thanks in advance! UPDATE - I'm now using Maven, passing a project as a compiler plugin to it. It seems fast enough; I'm not sure what kind of jar caching Maven does. A current repository for Scala 2.8.0 is available [link]. The archetypes are very cool, and cross-platform support seems very good. However, about compile issues, I'm not sure if fsc is actually fixed, or my project is stable enough (e.g. class names aren't changing) -- running it manually doesn't bother me as much. If you'd like to see an example, feel free to browse the pom.xml files I'm using [github]. UPDATE 2 - from benchmarks I've seen, Daniel Spiewak is right that buildr's faster than Maven (and, if one is doing incremental changes, Maven's 10 second latency gets annoying), so if one can craft a compatible build file, then it's probably worth it...
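    Regarding passing flags such as -Xprint:jvm when building with Maven: a sketch of how the (old) org.scala-tools maven-scala-plugin accepts extra compiler arguments; the plugin coordinates are from that era and the argument is illustrative:

      <plugin>
          <groupId>org.scala-tools</groupId>
          <artifactId>maven-scala-plugin</artifactId>
          <executions>
              <execution>
                  <goals>
                      <goal>compile</goal>
                      <goal>testCompile</goal>
                  </goals>
              </execution>
          </executions>
          <configuration>
              <args>
                  <!-- print the lowered code, as mentioned above -->
                  <arg>-Xprint:jvm</arg>
              </args>
          </configuration>
      </plugin>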

    Read the article

  • Android: Creating custom class of resources

    - by Sebastian
    Hi, the R class on Android has its limitations. You can't use the resources dynamically for loading audio, pictures or whatever. If you want, for example, to load a set of audio files for a chosen object, you can't do something like: R.raw."string-upon-chosen-object" I'm new to Android and at least I didn't find how you could do that depending on which objects are chosen, or anything more dynamic than that. So I thought about making it dynamic with a little memory overhead, but I'm in doubt whether it's worth it or whether I should just work differently with external resources. The idea is this: modify the Ant build XML to execute my own task. This task is a Java program that parses the R.java file, building a set of HashMaps with its (key, value) pairs. I have done this manually and it's working well, so I need some expert voices about it. This is how I will manage the whole thing: generate a base Application class, e.g. MainActivityResources, that builds up all the required methods and attributes. Then you can access those methods by invoking getApplication() and then the desired method. Something like this: package [packageName] import android.app.Application; import java.util.HashMap; public class MainActivityResources extends Application { private HashMap<String,Integer> [resNameObj1]; private HashMap<String,Integer> [resNameObj2]; ... private HashMap<String,Integer> [resNameObjN]; public MainActivityResources() { super(); [resNameObj1] = new HashMap<String,Integer>(); [resNameObj1].put("[resNameObj1_Key1]", new Integer([resNameObj1_Value1])); [resNameObj1].put("[resNameObj1_Key2]", new Integer([resNameObj1_Value2])); [resNameObj2] = new HashMap<String,Integer>(); [resNameObj2].put("[resNameObj2_Key1]", new Integer([resNameObj2_Value1])); [resNameObj2].put("[resNameObj2_Key2]", new Integer([resNameObj2_Value2])); ... [resNameObjN] = new HashMap<String,Integer>(); [resNameObjN].put("[resNameObjN_Key1]", new Integer([resNameObjN_Value1])); [resNameObjN].put("[resNameObjN_Key2]", new Integer([resNameObjN_Value2])); } public int get[ResNameObj1](String resourceName) { return [resNameObj1].get(resourceName).intValue(); } public int get[ResNameObj2](String resourceName) { return [resNameObj2].get(resourceName).intValue(); } ... public int get[ResNameObjN](String resourceName) { return [resNameObjN].get(resourceName).intValue(); } } The question is: will this add too much memory use on the device? Is it worth it? Regards,

    Read the article

  • Zero code coverage with cobertura 1.9.2 but tests are working

    - by eraonel
    I run the code coverage target: <junit fork="yes" dir="${basedir}" failureProperty="test.failed"> <!-- Note the classpath order: instrumented classes are before the original (uninstrumented) classes. This is important. --> <classpath path="${instrumented.dir}" /> <classpath path="${classes.dir}" /> <classpath refid="classpath" /> <!-- The instrumented classes reference classes used by the Cobertura runtime, so Cobertura and its dependencies must be on your classpath. --> <classpath refid="cobertura.classpath" /> <formatter type="xml" /> <!--<test name="${testcase}" todir="${reports.xml.dir}" if="testcase" />--> <batchtest fork="yes" todir="${reports.xml.dir}"> <fileset dir="${classes.dir}"> <include name="**/generated/AllTests.class" /> </fileset> </batchtest> </junit> <junitreport todir="${reports.xml.dir}"> <fileset dir="${reports.xml.dir}"> <include name="TEST-*.xml" /> </fileset> <report format="frames" todir="${reports.html.dir}" /> </junitreport> Then I get the following output ( when using fork="true"): java.lang.reflect.InvocationTargetException at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:585) at net.sourceforge.cobertura.util.FileLocker.lock(FileLocker.java:124) at net.sourceforge.cobertura.coveragedata.ProjectData.saveGlobalProjectData(ProjectData.java:331) at net.sourceforge.cobertura.coveragedata.SaveTimer.run(SaveTimer.java:31) at java.lang.Thread.run(Thread.java:595) Caused by: java.io.IOException: No locks available at sun.nio.ch.FileChannelImpl.lock0(Native Method) at sun.nio.ch.FileChannelImpl.lock(FileChannelImpl.java:784) at java.nio.channels.FileChannel.lock(FileChannel.java:865) ... 8 more --------------------------------------- Unable to get lock on /vobs/rnc/rrt/roam2/roamSs/RoamMao_swb/RoamMao_bldu/ant_build/cobertura.ser.lock: null This is known to happen on Linux kernel 2.6.20. Make sure cobertura.jar is in the root classpath of the jvm process running the instrumented code. If the instrumented code is running in a web server, this means cobertura.jar should be in the web server's lib directory. Don't put multiple copies of cobertura.jar in different WEB-INF/lib directories. Only one classloader should load cobertura. It should be the root classloader. I am using Ant 1.7.0 and cobertura 1.9.2. Any ideas why there is no coverage? Test run ok as I see in my target. I have tried to switch java versions ( 1.5.0_06 and 1.6.0_10) but no difference.
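    A common cause of empty coverage with a forked <junit> is that the forked JVM records coverage to (or fails to lock) a cobertura.ser other than the one cobertura-report later reads, and the lock error here suggests the ClearCase /vobs path may not support file locking at all. A sketch of one common workaround, passing an explicit datafile location on a local filesystem to the forked tests (the same path must then be given to the cobertura-instrument and cobertura-report tasks); the temp-dir path is illustrative:

      <junit fork="yes" dir="${basedir}" failureProperty="test.failed">
          <!-- tell the Cobertura runtime in the forked JVM where to record coverage -->
          <sysproperty key="net.sourceforge.cobertura.datafile"
                       file="${java.io.tmpdir}/cobertura.ser" />
          <classpath path="${instrumented.dir}" />
          <classpath path="${classes.dir}" />
          <classpath refid="classpath" />
          <classpath refid="cobertura.classpath" />
          <!-- formatter and batchtest elements as in the original target -->
      </junit>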

    Read the article

  • Implementing default constructors

    - by James
    Implement the default constructor and the constructors with one and two int parameters. The one-parameter constructor should initialize the first member of the pair; the second member of the pair is to be 0. Overload the binary operator + to add the pairs as follows: (a, b) + (c, d) = (a + c, b + d). Overload the - analogously. Overload the * on a pair and an int as follows: (a, b) * c = (a * c, b * c). Write a program to test all the member functions and overloaded operators in your class definition. You will also need to write accessor (get) functions for each member. The definition of the class Pairs: class Pairs { public: Pairs(); Pairs(int first, int second); Pairs(int first); // other members and friends friend istream& operator>> (istream&, Pairs&); friend ostream& operator<< (ostream&, const Pairs&); private: int f; int s; }; Self-Test Exercise #17: istream& operator>> (istream& ins, Pairs& second) { char ch; ins >> ch; // discard initial '(' ins >> second.f; ins >> ch; // discard comma ',' ins >> second.s; ins >> ch; // discard final ')' return ins; } ostream& operator<< (ostream& outs, const Pairs& second) { outs << '('; outs << second.f; outs << ", "; // I followed the Author's suggestion here. outs << second.s; outs << ")"; return outs; }

    Read the article

  • How to create multiboot flash drive

    - by Nrew
    I've found a guide here: http://www.pendrivelinux.com/boot-multiple-iso-from-usb-multiboot-usb/ And found this menu.lst in my flash drive, which seems to be the one that I'm seeing when I boot using my flash drive: # This Menu Created by Lance http://www.pendrivelinux.com # Ongoing Suggested Menu Entries and the Suggestor are noted! default 0 timeout 30 color NORMAL HIGHLIGHT HELPTEXT HEADING splashimage=(hd0,0)/splash.xpm.gz foreground=FFFFFF background=0066FF title Memtest86+ find --set-root /memtest86+-4.00.iso map --mem /memtest86+-4.00.iso (hd32) map --hook root (hd32) chainloader (hd32) # Suggested by madprofessor title Boot Clonezilla root (hd0,0) kernel /clonezilla/live/vmlinuz live-media-path=clonezilla/live bootfrom=/dev/sd boot=live union=aufs noprompt ocs_live_run="ocs-live-general" ocs_live_extra_param="" ocs_live_keymap="" ocs_live_batch="no" ocs_lang="" vga=791 ip=frommedia initrd /clonezilla/live/initrd.img title Parted Magic 4.9 (Partition Tools) find --set-root /pmagic-4.9.iso map /pmagic-4.9.iso (hd32) map --hook root (hd32) chainloader (hd32) # Suggested by Deb title Partition Wizard 4.2 (Partition Tools) find --set-root /pwhe42.iso map /pwhe42.iso (hd32) map --hook root (hd32) chainloader (hd32) title Balder DOS image (FreeDOS) map --unsafe-boot /balder10.img (fd0) map --hook chainloader --force (fd0)+1 rootnoverify (fd0) # Suggested by Szymon Silski title Linux Mint 8 find --set-root /LinuxMint-8.iso map /LinuxMint-8.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/mint.seed boot=casper persistent iso-scan/filename=/LinuxMint-8.iso splash initrd /casper/initrd.lz title Ubuntu 10.04 find --set-root /ubuntu-10.04-desktop-i386.iso map /ubuntu-10.04-desktop-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/ubuntu.seed boot=casper persistent iso-scan/filename=/ubuntu-10.04-desktop-i386.iso splash initrd /casper/initrd.lz title Xubuntu 10.04 (XFCE Desktop) find --set-root /xubuntu-10.04-desktop-i386.iso map /xubuntu-10.04-desktop-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/xubuntu.seed boot=casper persistent iso-scan/filename=/xubuntu-10.04-desktop-i386.iso splash initrd /casper/initrd.lz title Kubuntu 10.04 (KDE Desktop) find --set-root /kubuntu-10.04-desktop-i386.iso map /kubuntu-10.04-desktop-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/kubuntu.seed boot=casper persistent iso-scan/filename=/kubuntu-10.04-desktop-i386.iso splash initrd /casper/initrd.lz # Suggested by Ambriel title Lubuntu 10.04 (LXDE Lightweight Desktop) find --set-root /lubuntu-10.04.iso map /lubuntu-10.04.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/lubuntu.seed boot=casper persistent iso-scan/filename=/lubuntu-10.04.iso splash initrd /casper/initrd.lz title Ubuntu 10.04 Netbook Remix (NetBook Distro) find --set-root /ubuntu-10.04-netbook-i386.iso map /ubuntu-10.04-netbook-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/netbook-remix.seed boot=casper persistent iso-scan/filename=/ubuntu-10.04-netbook-i386.iso splash initrd /casper/initrd.lz title Ubuntu 10.04 Server Edition Installer (32 bit Installer Only) find --set-root /ubuntu-10.04-server-i386.iso map /ubuntu-10.04-server-i386.iso (0xff) map --hook root (0xff) kernel /install/vmlinuz file=/cdrom/preseed/ubuntu-server.seed boot=install iso-scan/filename=/ubuntu-10.04-server-i386.iso splash initrd /install/initrd.gz title Ubuntu 9.10 find 
--set-root /ubuntu-9.10-desktop-i386.iso map /ubuntu-9.10-desktop-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/ubuntu.seed boot=casper persistent iso-scan/filename=/ubuntu-9.10-desktop-i386.iso splash initrd /casper/initrd.lz title Xubuntu 9.10 find --set-root /xubuntu-9.10-desktop-i386.iso map /xubuntu-9.10-desktop-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/xubuntu.seed boot=casper persistent iso-scan/filename=/xubuntu-9.10-desktop-i386.iso splash initrd /casper/initrd.lz title Kubuntu 9.10 find --set-root /kubuntu-9.10-desktop-i386.iso map /kubuntu-9.10-desktop-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/kubuntu.seed boot=casper persistent iso-scan/filename=/kubuntu-9.10-desktop-i386.iso splash initrd /casper/initrd.lz # Ubuntu Server and Netbook Remix suggested by Wojciech Holek title Ubuntu 9.10 Server Edition Installer (Installer Only) find --set-root /ubuntu-9.10-server-i386.iso map /ubuntu-9.10-server-i386.iso (0xff) map --hook root (0xff) kernel /install/vmlinuz file=/cdrom/preseed/ubuntu-server.seed boot=install iso-scan/filename=/ubuntu-9.10-server-i386.iso splash initrd /install/initrd.gz title Ubuntu 9.10 Netbook Remix (NetBook Distro) find --set-root /ubuntu-9.10-netbook-remix-i386.iso map /ubuntu-9.10-netbook-remix-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/netbook-remix.seed boot=casper persistent iso-scan/filename=/ubuntu-9.10-netbook-remix-i386.iso splash initrd /casper/initrd.lz title Ubuntu 9.10 Rescue Remix (Recovery Tools) find --set-root /ubuntu-rescue-remix-9-10-revision1.iso map /ubuntu-rescue-remix-9-10-revision1.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/ubuntu.seed boot=casper iso-scan/filename=/ubuntu-rescue-remix-9-10-revision1.iso splash initrd /casper/initrd.lz title DSL 4.4.10 find --set-root /dsl-4.4.10-initrd.iso map --mem /dsl-4.4.10-initrd.iso (hd32) map --hook root (hd32) chainloader (hd32) title AVG Rescue CD (Anti-Virus + Anti-Spyware) find --set-root /avg_arl_en_90_100114.iso map /avg_arl_en_90_100114.iso (hd32) map --hook chainloader (hd32) title Ultimate Boot CD 4.11 find --set-root /ubcd411.iso map /ubcd411.iso (hd32) map --hook chainloader (hd32) title OphCrack XP 2.3.1 (XP Password Cracker) find --set-root /ophcrack-xp-livecd-2.3.1.iso map /ophcrack-xp-livecd-2.3.1.iso (0xff) map --hook root (0xff) kernel /boot/bzImage rw root=/dev/null vga=normal lang=C kmap=us screen=1024x768x16 autologin initrd /boot/rootfs.gz title OphCrack Vista 2.3.1 (Vista Password Cracker) find --set-root /ophcrack-vista-livecd-2.3.1.iso map /ophcrack-vista-livecd-2.3.1.iso (0xff) map --hook root (0xff) kernel /boot/bzImage rw root=/dev/null vga=normal lang=C kmap=us screen=1024x768x16 autologin initrd /boot/rootfs.gz # Suggested by Greg Steer title Offline NT Password & Registy Editor find --set-root /cd080802.iso map /cd080802.iso (hd32) map --hook chainloader (hd32) title SliTaz 2.0 find --set-root /slitaz-2.0.iso map --mem /slitaz-2.0.iso (hd32) map --hook chainloader (hd32) title Riplinux 9.3 find --set-root /RIPLinuX-9.3.iso map --heads=0 --sectors-per-track=0 /RIPLinuX-9.3.iso (0xff) || map --heads=0 --sectors-per-track=0 --mem /RIPLinuX-9.3.iso (0xff) map --hook chainloader (0xff) # Suggested by Sunny title YlmF (Windows Like OS) find --set-root /YlmF_OS_EN_v1.0.iso map /YlmF_OS_EN_v1.0.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/ubuntu.seed 
boot=casper persistent iso-scan/filename=/YlmF_OS_EN_v1.0.iso splash initrd /casper/initrd.lz # Suggested by Martin Andersson title DBAN 1.0.7 (Drive Nuker) find --set-root /dban-1.0.7_i386.iso map --mem /dban-1.0.7_i386.iso (hd32) map --hook root (hd32) chainloader (hd32) # Suggested by Robin McGough title xPUD 0.9.2 (NetBook Distro) find --set-root --ignore-floppies --ignore-cd /xpud-0.9.2.iso map --heads=0 --sectors-per-track=0 /xpud-0.9.2.iso (hd32) map --hook chainloader (hd32) title Puppy 4.3.1 find --set-root /puppy/pup-431.sfs kernel /puppy/vmlinuz initrd /puppy/initrd.gz # Suggested by Relst title Run a Linux OS from the Internet kernel /gpxe.lkrn I also put some .iso files for os installers (Windows xp sp2 and Ubuntu 10.04) But they didn't show up in the list when I booted Do I need to: extract the .iso files and put in in their respective folders? Add the os that I added on the menu.lst? How do I add the iso image(os) in the menu.lst? Before adding the .iso files I first made a folder named Windows xp sp2 then placed the .iso files in there. Please help, I think I need to add the folder name or the file name on the menu.lst but I don't know how

    Read the article
