Search Results

Search found 365 results on 15 pages for 'chuck speaks'.

Page 12 of 15

  • Oracle Tutor: Installing Is Not Implementing or Why CIOs should care about End User Adoption

    - by emily.chorba(at)oracle.com
    Eighteen months ago I gave a Tutor and UPK Productive Day One overview to a CIO friend of mine. He works in a manufacturing business that had recently been purchased by a global conglomerate. He had a major implementation coming up, but said that the corporate team would be coming in to handle the project. I asked about their end user training approach, but it was unclear to him at the time.

    We stayed in touch over the course of the implementation project. The major activities were data conversion, how-to workshops, General Ledger realignment, and report definition. The message was "Here's how we do it at corporate, and here's how you are going to do it." In short, it was an application software installation. The corporate team had experience and confidence, and the effort through go-live was smooth.

    Some weeks after cutover, problems with customer orders began to surface. Orders could not be fulfilled in a timely fashion. The problem got worse, and the corporate emergency team was called in. After many days of analysis, the issue was tracked down and resolved, but by then there were weeks of backorders, and their customer base was significantly impacted. It took three months of constant handholding of customers by the sales force for goodwill to be reestablished, and this in itself diminished a new product sales push.

    I learned of these results in a recent conversation with the CIO. I asked him what the root of the problem was, and he replied that it was twofold. The first component was a lack of understanding by customer service reps about how a particular data item in order entry was to be filled in, resulting in discrepant order data. The second component was that product planners were using this data, along with data from other sources, to fill in a spreadsheet based on the abandoned system. This spreadsheet was the primary input for planning data. The result of these two inaccuracies was that key parts were not being ordered to effectively meet demand, and the lead time for finished goods was pushed out by weeks.

    I reminded him about the Productive Day One approach and its focus on methodology and tools for end user training. A more collaborative solution workshop would have identified proper applications use in the new environment. Using UPK to document correct transaction entry would have provided effective guidelines to the CSRs for data entry. Using Oracle Tutor to document the manual tasks would have eliminated the use of an out-of-date spreadsheet. As we talked this over, he said, "I wish I knew when I started what I know now."

    Effective end user adoption is the most critical and most overlooked success factor in applications implementations. When the switch is thrown at go-live, employees need to know how to use the new systems to do their jobs. Their jobs are made up of manual steps and systems steps which must be performed in the right order for the implementing organization to operate smoothly. Use Tutor to document the manual policies and procedures, use UPK to document the systems tasks, and develop this documentation in conjunction with a solution workshop. This is the path to effective end user training material and a smooth implementation.

    Learn More: For more information about Tutor, visit Oracle.com or the Tutor Blog. Post your questions at the Tutor Forum. -- Chuck Jones, Product Manager, Oracle Tutor and BPM

    Read the article

  • Send raw data to USB parallel port after upgrading to 11.10

    - by zaphod
    I have a laser cutter connected via a generic USB-to-parallel adapter. The laser cutter speaks HPGL, as it happens, but since this is a laser cutter and not a plotter, I usually want to generate the HPGL myself, since I care about the ordering, speed, and direction of cuts and so on. In previous versions of Ubuntu, I was able to print to the cutter by copying an HPGL file directly to the corresponding USB "lp" device. For example:

        cp foo.plt /dev/usblp1

    Well, I just upgraded to Ubuntu 11.10 oneiric, and I can't find any "lp" devices in /dev anymore. D'oh! What's the preferred way to send raw data to a parallel port in Ubuntu?

    I've tried System Settings > Printing > Add, hoping that I might be able to associate my device with some kind of "raw printer" driver and print to it with a command like

        lp -d LaserCutter foo.plt

    But my USB-to-parallel adapter doesn't seem to show up in the list. What I do see are my HP Color LaserJet, two USB-to-serial adapters, "Enter URI", and "Network Printer". Meanwhile, over in /dev, I do see /dev/ttyUSB0 and /dev/ttyUSB1 devices for the two USB-to-serial adapters. I don't see anything obvious corresponding to the HP printer (which was /dev/usblp0 prior to the upgrade), except for generic USB stuff. For example, sudo find /dev | grep lp produces no output. I do seem to be able to print to the HP printer just fine, though. The printer setup GUI gives it a device URI starting with "hp:", which isn't much help for the parallel adapter. The CUPS administrator's guide makes it sound like I might need to feed it a device URI of the form parallel:/dev/SOMETHING, but of course if I had a /dev/SOMETHING I'd probably just go on writing to it directly.

    Here's what dmesg says after I disconnect and reconnect the device from the USB port:

        [  924.722906] usb 1-1.1.4: USB disconnect, device number 7
        [  959.993002] usb 1-1.1.4: new full speed USB device number 8 using ehci_hcd

    And here's how it shows up in lsusb -v:

        Bus 001 Device 008: ID 1a86:7584 QinHeng Electronics CH340S
        Device Descriptor:
          bLength                18
          bDescriptorType         1
          bcdUSB               1.10
          bDeviceClass            0 (Defined at Interface level)
          bDeviceSubClass         0
          bDeviceProtocol         0
          bMaxPacketSize0         8
          idVendor           0x1a86 QinHeng Electronics
          idProduct          0x7584 CH340S
          bcdDevice            2.52
          iManufacturer           0
          iProduct                2 USB2.0-Print
          iSerial                 0
          bNumConfigurations      1
          Configuration Descriptor:
            bLength                 9
            bDescriptorType         2
            wTotalLength           32
            bNumInterfaces          1
            bConfigurationValue     1
            iConfiguration          0
            bmAttributes         0x80 (Bus Powered)
            MaxPower             96mA
            Interface Descriptor:
              bLength                 9
              bDescriptorType         4
              bInterfaceNumber        0
              bAlternateSetting       0
              bNumEndpoints           2
              bInterfaceClass         7 Printer
              bInterfaceSubClass      1 Printer
              bInterfaceProtocol      2 Bidirectional
              iInterface              0
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x82 EP 2 IN
                bmAttributes            2
                  Transfer Type            Bulk
                  Synch Type               None
                  Usage Type               Data
                wMaxPacketSize     0x0020 1x 32 bytes
                bInterval               0
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x02 EP 2 OUT
                bmAttributes            2
                  Transfer Type            Bulk
                  Synch Type               None
                  Usage Type               Data
                wMaxPacketSize     0x0020 1x 32 bytes
                bInterval               0
        Device Status:     0x0000 (Bus Powered)
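
    Two directions worth trying, sketched below with illustrative device names: on 11.10 the usblp kernel module is often simply no longer bound to the adapter, and failing that, CUPS can drive the port as a raw queue.

        # 1) See whether loading usblp brings the /dev/usblp* nodes back
        #    (hedged: assumes the CH340-based adapter is usblp-compatible):
        sudo modprobe usblp
        ls /dev/usblp*

        # 2) Otherwise, define a raw CUPS queue on the adapter's USB URI.
        #    List the URIs CUPS can see, then create a queue with the raw driver:
        sudo lpinfo -v
        sudo lpadmin -p LaserCutter -E -v 'usb://...' -m raw   # URI from lpinfo -v
        lp -d LaserCutter foo.plt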

    Read the article

  • What do you do when practical problems get in the way of practical goals?

    - by P.Brian.Mackey
    UPDATE: Source control is good to use. Sometimes, real-world issues make it impractical to use. For example:

      - If the team is not used to using source control, training problems can arise.
      - If a team member directly modifies code on the server, various issues can arise: merge problems, lack of history, etc.

    Let's say there's a project that is way out of sync. The physical files on the server differ from source control in unknown ways across ~100 files. Merging would take not only a great knowledge of the project, but is also well beyond what can be completed in the given time. Other projects are falling out of sync as well. Developers continue to distrust source control and therefore compound the issue by not using it.

      - Developers argue that using source control is wasteful because merging is error prone and difficult. This is a difficult point to argue, because when source control is being so badly misused and so continually bypassed, it is indeed error prone. Therefore, the evidence "speaks for itself" in their view.
      - Developers argue that directly modifying code on the server saves time. This is also difficult to argue, because the merge required to synchronize the code to start with is time consuming, across ~10 projects.
      - Permanent files are often stored in the same directory as the web project, so a full publish erases these files that are not in source control. This also drives distrust of source control, because "publishing breaks the project". Fixing this (moving stored files out of the solution subfolders) takes a great deal of time and debugging, as these locations are not set in web.config and often exist across multiple code points.

    So the culture perpetuates itself: bad practice begets more bad practice, and bad solutions drive new hacks to "fix" much deeper, much more time-consuming problems. Servers and hard drive space are extremely difficult to come by, yet user expectations are rising. What can be done in this situation?

    Read the article

  • Maximizing the Value of Software

    - by David Dorf
    A few years ago we decided to increase our investments in documenting retail processes and architectures. There were several goals, but the main two were to help retailers maximize the value they derive from our software and to help system integrators implement our software faster. The sale is only part of our success metric -- it's actually more important that the customer realize the benefits of the software. That's when we actually celebrate.

    This week many of our customers are gathered in Chicago to discuss their successes during our annual Crosstalk conference. That provides the perfect forum to announce the release of the Oracle Retail Reference Library. The RRL is available for free to Oracle Retail customers and partners. It contains thousands of hours of work and represents years of experience in the retail industry. The Retail Reference Library is composed of three offerings:

    Retail Reference Model: We've been sharing the RRM for several years now, with lots of accolades. The RRM is a set of business process diagrams at varying levels of granularity. This release marks the debut of Visio documents, which should make it easier for retailers to adopt and edit the diagrams. The processes represent an approximation of the Oracle Retail software, but at higher levels they are pretty generic and therefore usable with other software as well. Using these processes, the business and IT are better able to communicate the expectations of the software. They can be used to guide customization when necessary, and to help identify areas for optimization in the organization.

    Retail Reference Architecture: When embarking on a software implementation project, it can be daunting to start from a blank sheet of paper. So we offer the RRA, a comprehensive set of documents that describe the retail enterprise in terms of logical architecture, physical deployments, and systems integration. These documents and diagrams describe how all the systems typically found in a retail enterprise work together. They serve as a way to jump-start implementations using best practices we've captured over the years.

    Retail Semantic Glossary: Have you ever seen two people argue over something because they're using misaligned terminology? It's a huge waste, and it happens all the time. The Retail Semantic Glossary is a simple application that allows retailers to define terms and metrics in a centralized database. This initial version comes with limited content, with the goal of adding more over subsequent releases. This is the single source for defining key performance indicators, metrics, algorithms, and terms so that the retail organization speaks in a consistent language.

    These three offerings are downloaded from My Oracle Support separately and linked together using the start page above. Everything is navigated using a Web browser. See the Oracle Retail Documentation blog for more details.

    Read the article

  • Speaker Notes...

    - by wulfers
    At a .NET User Group meeting this week, I experienced two poorly prepared speakers floundering through presentations. As a Lead Technologist at the company I work for, I have experience training technical staff and also giving presentations at code camps. Here are a few guidelines for aspiring speakers that you might find helpful:

    1. Do not stand in front of your audience and read your slides. This is offensive to your audience and not what they came for. Your slides are there to reinforce the information you are presenting, to give the audience a little clarification on some terms you may use, and to serve as a visual aid for some complicated issues.
    2. Have someone who speaks the presentation language fluently review your presentation (slides, notes, etc.). Also record at least ten minutes of your presentation and have that same person review that. One of the speakers this week used the word "basically" fifty times in less than thirty minutes; I started to flinch every time he used the term.
    3. Be prepared -- before the presentation begins. Don't make any last-minute changes to your presentation or demo code the night before. Don't patch your laptop or demo servers the night before. If possible, create a virtual image that you only use for presentations and use that (refreshed before every presentation).
    4. Know the level of expertise of your audience. Speaking above or below their abilities will make or break your presentation.
    5. Deliver what you promise. The presentation this week was supposed to be on BDD (Behavior Driven Development). The presenter completely ran off track, and 90% of the discussion was about how his team mistakenly used TDD (Test Driven Development) and was unhappy with the results. Because of his loss of focus, we only heard a rushed 10-minute presentation on BDD, which was a disservice to the audience.
    6. Practice your presentation with your own small team before you try it on a room full of people you don't know. A side benefit of doing this with your own team is that you can get candid feedback and also get kudos for training your own team. I find I can also turn my presentations into technical white papers, getting a third benefit from the work I've put into a presentation.
    7. Sharpen your own saw. Pick a topic that is fairly current -- something you would like to learn about and that would benefit your current career path.
    8. Have fun doing it.

    Read the article

  • Taking a Chomp out of a (Social Network) Product Hype

    - by kellsey.ruppel
    Andrew Kershaw, Senior Director of Oracle Social Network Product Development, speaks about Oracle Social Network.

    One of our competitors is being very aggressive with its own developed Social Network add-on, but there should be no doubt that the Oracle social capabilities available with Fusion CRM stack up well against it. Within the Oracle Cloud, we have announced a product called Oracle Social Network. That technology is pre-integrated into Fusion Applications, enabling your customer to build a collaborative and social enterprise (without all the noise!).

    Oracle Social Network is designed together with our Fusion Applications. It is very conveniently pre-integrated with CRM, HCM, Financials, Projects, Supply Chain, and the Fusion family. But what's even better is that the individual teams can take a considered approach to what they are trying to achieve within the collaboration process and the outcome they are trying to enable, and then utilize the network and collaboration tools to support that result. And there's more! The Fusion teams can design social interactions that bridge across and outside their individual product lines, because we have more than just a product line, and they know they have the social network to connect them.

    I know we have a superior product, but it is our ability to understand and execute across the enterprise that will enable us to deliver a much more robust and capable platform in the short term than our competitor can. We have built a product specifically designed for enterprise social collaboration, which is not the same for the competition. We have delivered a much more effective solution -- one in which individuals can easily collaborate to get results, while being confident that they know who has access to their information. Our platform has been pre-built to cross company boundaries and enable our customers to collaborate not just with their customers, but with their partners and suppliers as well. So Fusion addresses the combination of the enterprise application suite with enterprise collaboration and social networking.

    Oracle Social Network already has a feature-function advantage over our competitor's tool, providing real added value to the employees. Plus, Oracle has the ability to execute in a broad enterprise and cross-enterprise way that our competitors cannot. We have the power of a tool that provides the core social fabric across all of the applications, as well as supporting enterprise collaboration. That allows us to provide intelligent business insight, connections, and recommendations that our competitor simply can't. From our competitors, customers get integration for Sales and integration for Service, but then they have to integrate every other enterprise asset themselves. With Oracle, we are doing the integration. Fusion Applications will be pre-integrated, and over time all of the applications in the business suite, including our Applications Unlimited and specialist industry applications, will connect to the Oracle Social Network. I'm confident these capabilities make Oracle Social Network the only collaboration platform on which to deliver the social enterprise.

    Read the article

  • So Much Happening at Devoxx

    - by Tori Wieldt
    Devoxx, the premier Java conference in Europe, has been sold out for a while. The organizers (thanks, Stephan and crew!) cap attendance to make sure all attendees have a great experience, and that speaks volumes about their priorities. The speakers, hackathons, labs, and networking are all first class. The Oracle Technology Network will be there, and if you were smart/lucky enough to get a ticket, come find us and join the fun:

    IoT Hack Fest: Build fun and creative Internet of Things (IoT) applications with Java Embedded, Raspberry Pi, and Leap Motion on the University Days (Monday and Tuesday). Learn from top experts Yara & Vinicius Senger and Geert Bevin at two Raspberry Pi & Leap Motion hands-on labs and hacking sessions. Bring your computer; training and equipment will be provided. Devoxx will also host an Internet of Things shop on the exhibition floor where attendees can purchase Arduino, Raspberry Pi, and robot starter kits. Bring your IoT wish list!

    Video Interviews: Yolande Poirier and I will be interviewing members of the Java community at the back of the Expo hall on Wednesday and Thursday. Videos are posted on Parleys and YouTube/Java. We have a few slots left, so contact me (you can DM @Java) if you want to share your insights or a cool new tip or trick with the rest of the developer community. (No commercials, no fluff. Keep it techie and keep it real.)

    Oracle Keynote: Wednesday morning, Mark Reinhold, Chief Java Platform Architect, and Brian Goetz, Java Language Architect, will provide an update on Java 8 and beyond.

    Oracle Booth: Drop by the Oracle booth to see old and new friends. We'll have Java in Action demos and the experts to explain them and answer your questions. We are raffling off Raspberry Pis each day, so be sure to get your badge scanned. We'll have beer in the booth each evening. Look for @Java in her lab coat.

    See you at Devoxx!

    Read the article

  • WPF UserControl weird binding problem

    - by Heko
    Hello! I'm using a Ribbon window, and in the content area beneath it I have a grid in which I will be displaying UserControls. To demonstrate my problem, let's take a look at this simple UserControl:

        <ListView x:Name="lvPersonList">
            <ListView.View>
                <GridView>
                    <GridViewColumn Width="120" Header="Name" DisplayMemberBinding="{Binding Name}"/>
                    <GridViewColumn Width="120" Header="Height" DisplayMemberBinding="{Binding Height}"/>
                </GridView>
            </ListView.View>
        </ListView>

    And the code (note: the constructor is renamed here to match the class):

        public partial class MyUserControl : UserControl
        {
            private List<Person> personList;

            public MyUserControl()
            {
                InitializeComponent();
                this.personList = new List<Person>();
                this.personList.Add(new Person { Name = "Chuck Norris", Height = 210 });
                this.personList.Add(new Person { Name = "John Rambo", Height = 200 });
                this.lvPersonList.ItemsSource = personList;
            }
        }

        public class Person
        {
            public string Name { get; set; }
            public int Height { get; set; }
        }

    The parent window:

        <Grid x:Name="grdContent" DockPanel.Dock="Top">
            <controls:MyUserControl x:Name="myUserControl" Visibility="Visible"/>
        </Grid>

    I don't understand why this binding doesn't work. Instead of the values (Name and Height), I get full class names. If I use this code in a Window, it works fine. Any ideas? I would like this user control to work on its own (it gets the data from the DB and represents it in a ListView)... Thanks!

    Read the article

  • Rails Fixtures Don't Seem to Support Transitive Associations

    - by Rick Wayne
    So I've got 3 models in my app, connected by HABTM relations:

        class Member < ActiveRecord::Base
          has_and_belongs_to_many :groups
        end

        class Group < ActiveRecord::Base
          has_and_belongs_to_many :software_instances
        end

        class SoftwareInstance < ActiveRecord::Base
        end

    And I have the appropriate join tables, and use foxy fixtures to load them:

        -- members.yml --
        rick:
          name: Rick
          groups: cals

        -- groups.yml --
        cals:
          name: CALS
          software_instances: delphi

        -- software_instances.yml --
        delphi:
          name: No, this is not a joke, someone in our group is still maintaining Delphi apps...

    And I do a rake db:fixtures:load, and examine the tables. Yep, the join tables groups_members and groups_software_instances have appropriate IDs in them, big numbers generated by hashing the fixture names. Yay! And if I run script/console, I can look at rick's groups and see that cals is in there, or cals's software_instances and delphi shows up. (Likewise delphi knows that it's in the group cals.) Yay!

    But if I look at rick.groups[0].software_instances... nada. []. And, oddly enough, rick.groups[0].id is something like 30 or 10, not 398398439. I've tried swapping around where the association is defined in the fixtures, e.g.

        -- members.yml --
        rick:
          name: rick

        -- groups.yml --
        cals:
          name: CALS
          members: rick

    No error, but same result. (Later: Confirmed, same behavior on MySQL. In fact I get the same thing if I take the foxification out and use ERB to directly create the join tables, so:

        -- license_groups_software_instances --
        cals_delphi:
          group_id: <%= Fixtures.identify 'cals' %>
          software_instance_id: <%= Fixtures.identify 'delphi' %>

    Before I chuck it all, reject foxy fixtures and all who sail in her, and just hand-code IDs... anybody got a suggestion as to why this is happening? I'm currently working in Rails 2.3.2 with sqlite3; this occurs on both OS X and Ubuntu (and I think with MySQL too). TIA, my compadres.

    Read the article

  • Help with fql.multiQuery

    - by Daniel Schaffer
    I'm playing around with the Facebook API's fql.multiQuery method. I'm just using the API Test Console and trying to get a successful response, but I can't seem to figure out exactly what it wants. Here's the text I'm entering into the "queries" field:

        {"tags" : "select subject from photo_tag where subject != 601599551 and pid in ( select pid from photo_tag where subject = 601599551 ) and subject in ( select uid2 from friend where uid1 = 601599551 )",
         "foo" : "select uid from user where uid = 601599551"}

    All it'll give me is a "queries parameter: array expected" error. I've also tried just about every permutation I could think of: wrapping the name/query pairs in their own curly braces, adding brackets, adding whitespace, removing whitespace in case it didn't want an associative array (for those watching the edits, I just found out about these wonderful things now... oy), all to no avail. Is there something painfully obvious I'm missing here, or do I need to make like Chuck Norris (er, Jon Skeet) and simply will it to do my bidding?

    Update: A note to anyone finding this question now: the fql.multiquery test console appears to be broken. You can test your query by clicking on the generated URL in the test console and manually adding the "queries" parameter into the query string.

    Read the article

  • how and where should I set and load NSUserDefaults in a utility app?

    - by Greywolf210
    I have followed directions in several books and suggestions on some forums, but my app crashes when I try to set user preferences. I have the following lines in the "done" method of my flipside view controller:

        - (IBAction)done {
            NSUserDefaults *userDefaults = [NSUserDefaults standardUserDefaults];
            [userDefaults setBool:musicOnOff.on forKey:kMusicPreference];
            [userDefaults setObject:trackSelection forKey:kTrackPreference];
            [self.delegate flipsideViewControllerDidFinish:self];
        }

    and the following two methods in my mainViewController:

        - (void)initialDefaults {
            NSUserDefaults *userDefaults = [NSUserDefaults standardUserDefaults];
            [userDefaults setBool:YES forKey:kMusicPreference];
            [userDefaults setObject:@"Infinity" forKey:kTrackPreference];
        }

        - (void)setValuesFromPreferences {
            NSUserDefaults *userDefaults = [NSUserDefaults standardUserDefaults];
            BOOL musicSelection = [userDefaults boolForKey:kMusicPreference];
            NSString *trackSelection = [userDefaults objectForKey:kTrackPreference];
            if (musicSelection == YES) {
                if ([trackSelection isEqualToString:@"Infinity"])
                    song = [[BGMusic alloc] initWithPath:[[NSBundle mainBundle] pathForResource:@"Infinity" ofType:@"m4a"]];
                else if ([trackSelection isEqualToString:@"Energy"])
                    song = [[BGMusic alloc] initWithPath:[[NSBundle mainBundle] pathForResource:@"Energy" ofType:@"m4a"]];
                else if ([trackSelection isEqualToString:@"Enforcer"])
                    song = [[BGMusic alloc] initWithPath:[[NSBundle mainBundle] pathForResource:@"Enforcer" ofType:@"m4a"]];
                else if ([trackSelection isEqualToString:@"Continuum"])
                    song = [[BGMusic alloc] initWithPath:[[NSBundle mainBundle] pathForResource:@"Continuum" ofType:@"m4a"]];
                else if ([trackSelection isEqualToString:@"Pursuit"])
                    song = [[BGMusic alloc] initWithPath:[[NSBundle mainBundle] pathForResource:@"Pursuit" ofType:@"m4a"]];
                [song setRepeat:YES];
                counter = 0;
            }
            else
                [song close];
        }

    Anyone willing to help out? Thanks a bunch, Chuck
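
    On the "where to set initial defaults" half of the question, one hedged suggestion: the idiomatic spot is registerDefaults:, called early (for example from the app delegate). Registered values never touch disk; they only fill in for keys the user hasn't explicitly set yet. A minimal sketch, assuming kMusicPreference and kTrackPreference are NSString constants:

        // Register fallback values; safe to call on every launch.
        NSDictionary *appDefaults = [NSDictionary dictionaryWithObjectsAndKeys:
                                     [NSNumber numberWithBool:YES], kMusicPreference,
                                     @"Infinity", kTrackPreference,
                                     nil];
        [[NSUserDefaults standardUserDefaults] registerDefaults:appDefaults];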

    Read the article

  • UI Design, in case of numerous situations

    - by The King
    Hi... I'm creating a web form with around 12-15 input fields... You can have a look at the screen. The request is such that depending on the data the user selects in the GridView and the drop-down list, the appropriate textboxes and checkboxes need to be displayed.

    Sometimes the conditions are very direct, like: when the DDL value is "ABC", get only the paid amount from the user. Sometimes they are complex, like: if the DDL is "DEF" and the selected GPMS value is between 1000-2000, calculate the values of allowed, paid, etc. (using some formula) and direct focus to the Page No field, leaving the other fields open in case the user wants to edit... There are around 10-15 conditions like this.

    As this was done through agile, conditions were added as and when needed, and wherever it felt appropriate (DDL on-change event, GridView selection-changed event, etc.). After completion, I now see the code has become one big chunk and is growing unmanageable... Now I'm planning to clean this up.

    From your experience, what do you think is the best way to handle this? There is a possibility of more conditions like this being added in the future... Please let me know in case you need more information. I'm currently developing this app in C# .NET Windows Forms.

    Edit: Currently there are only three items (the DataGrid, the DDL, the OverrideAmt checkbox) that change the way other fields behave... Almost all of the conditions will fall between the two situations I mentioned... Mostly they belong to "enabling/disabling", "setting of values", and "changing focus", or any combination of these.
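
    One way to rein this in -- a hedged sketch, not from the original form, with all names illustrative: centralize the per-event logic into a single rules table and re-evaluate it from one place, so each new condition becomes a data entry rather than another event handler.

        using System;
        using System.Collections.Generic;

        // One rule = when it applies + what it does to the form.
        class FormRule
        {
            public Func<bool> AppliesWhen;
            public Action Apply;
        }

        class RuleEngine
        {
            readonly List<FormRule> rules = new List<FormRule>();

            public void Add(Func<bool> when, Action apply)
            {
                rules.Add(new FormRule { AppliesWhen = when, Apply = apply });
            }

            // Wire every input's changed event here instead of embedding
            // logic in each handler.
            public void UpdateForm()
            {
                foreach (var rule in rules)
                    if (rule.AppliesWhen())
                        rule.Apply();
            }
        }

        // Usage (illustrative):
        //   engine.Add(() => ddl.Text == "DEF" && gpms >= 1000 && gpms <= 2000,
        //              () => { CalculateAllowedAndPaid(); pageNoField.Focus(); });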

    Read the article

  • django class with an array of "parent" foreignkeys issue

    - by user298032
    Let's say I have a class called Fruit with child classes for the different kinds of fruit, each with their own specific attributes, and I want to collect them in a FruitBasket:

        class Fruit(models.Model):
            type = models.CharField(max_length=120, default='banana', choices=FRUIT_TYPES)
            ...

        class Banana(Fruit):
            """banana (fruit type)"""
            length = models.IntegerField(blank=True, null=True)
            ...

        class Orange(Fruit):
            """orange (fruit type)"""
            diameter = models.IntegerField(blank=True, null=True)
            ...

        class FruitBasket(models.Model):
            fruits = models.ManyToManyField(Fruit)
            ...

    The problem I seem to be having is that when I retrieve and inspect the Fruits in a FruitBasket, I only get the Fruit base class and can't get at the Fruit child class attributes. I think I understand what is happening -- when the set is retrieved from the database, the only fields that are retrieved are the Fruit base class fields. But is there some way to get the child class attributes as well without multiple expensive database transactions? (For example, I could get the set, then retrieve the child Fruit classes by the id of each element.) Thanks in advance, Chuck
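
    A hedged sketch of one approach: with Django's multi-table inheritance, each child row shares the parent's primary key, so the subclass fields for a whole basket can be pulled with one extra query per child type (not per object). The helper below is illustrative, not from the original code.

        def fruits_with_subclasses(basket):
            fruits = list(basket.fruits.all())
            ids = [f.pk for f in fruits]
            # in_bulk() returns {pk: obj}; child PKs equal the parent Fruit PKs.
            bananas = Banana.objects.in_bulk(ids)
            oranges = Orange.objects.in_bulk(ids)
            # Prefer the subclass instance when one exists.
            return [bananas.get(f.pk) or oranges.get(f.pk) or f for f in fruits]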

    Read the article

  • Which tools to use and how to find file descriptors leaking from Glassfish?

    - by cclark
    We release new code to production every week and Glassfish hasn't had any problems. This weekend we had to move racks at our hosting provider. There were no code changes (they just powered off, moved, re-racked, and powered on), but we're on new network infrastructure and suddenly we're leaking file descriptors like a sieve. So I'm guessing there is some sort of connection attempting to be made which now fails due to a network change. I'm running Glassfish v2ur2-b04/AS9.1_02 on RHEL4 with an embedded IMQ instance. After the move I started seeing:

        [#|2010-04-25T05:34:02.783+0000|SEVERE|sun-appserver9.1|javax.enterprise.system.container.web|_ThreadID=33;_ThreadName=SelectorThread-?4848;_RequestID=c4de6f6d-c1d6-416d-ac6e-49750b1a36ff;|WEB0756: Caught exception during HTTP processing.
        java.io.IOException: Too many open files
            at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
        ...
        [#|2010-04-25T05:34:03.327+0000|WARNING|sun-appserver9.1|javax.enterprise.system.stream.err|_ThreadID=34;_ThreadName=Timer-1;_RequestID=d27e1b94-d359-4d90-a6e3-c7ec49a0f383;|java.lang.NullPointerException
            at com.sun.jbi.management.system.AutoAdminTask.pollAutoDirectory(AutoAdminTask.java:1031)

    Using lsof I check the number of file descriptors, and I see quite a few entries which look like:

        java    18510 root 8556u sock 0,4 1555182 can't identify protocol
        java    18510 root 8557u sock 0,4 1555320 can't identify protocol
        java    18510 root 8558u sock 0,4 1555736 can't identify protocol
        java    18510 root 8559u sock 0,4 1555883 can't identify protocol

    If I count the open file descriptors every minute, I see them growing by 12 per minute. I have no idea what these sockets are. I've undeployed my application so there is only a plain Glassfish instance running, and I still see it leaking 12 file descriptors a minute. So I think this leak is in Glassfish or potentially IMQ. What approach should I take to tracking down these sockets of unknown protocol? What tools can I use (or flags can I pass to lsof) to get more information about where to look? Thanks, chuck
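
    A hedged sketch of how to watch the leak and catch the calls that create it (PID taken from the lsof output above; interval illustrative):

        PID=18510
        # Confirm the ~12/min growth rate:
        while true; do date; ls /proc/$PID/fd | wc -l; sleep 60; done

        # Record the network-related syscalls as the sockets are created;
        # the fd numbers in the trace can be matched against new lsof rows:
        strace -f -e trace=network -p $PID -o /tmp/glassfish-net.log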

    Read the article

  • Stored procedure woes ... inserting binary ...

    - by Wardy
    OK, so I have this stored procedure in my SQL 2008 database (works in 2005 too / used to)...

        CREATE PROCEDURE [dbo].[SetBinaryContent]
            @Ref nvarchar(50),
            @Content varbinary(MAX),
            @ObjectID uniqueidentifier
        AS
        BEGIN
            DELETE ObjectContent WHERE ObjectId = @ObjectID AND Ref = @Ref

            IF DATALENGTH(@Content) > 5
            BEGIN
                INSERT INTO ObjectContent (Ref, BinaryContent, ObjectId)
                VALUES (@Ref, @Content, @ObjectId)
            END

            UPDATE Objects SET [Status] = 1 WHERE ID = @ObjectID
        END

    Relatively simple: I take a byte array in C# and chuck it in @Content, then give it a GUID and a string for the other params, and off we go. Great -- it used to work... but it doesn't anymore... so, erm, what's wrong with this stored proc? I've stepped through my C# code thinking I screwed up somehow, but it definitely adds the params and gives them the correct values. So what would cause the server to just stop executing this stored proc correctly? When called, this proc executes but nothing changes in the db... no new records are added to the ObjectContent table. Weird, huh...
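
    A hedged test harness to rule out the C# layer -- call the proc directly with a known payload and see what DATALENGTH receives (SQL Server 2008 syntax; values illustrative):

        DECLARE @blob varbinary(MAX) = 0x0102030405060708;
        SELECT DATALENGTH(@blob) AS bytes_in;   -- expect 8

        EXEC dbo.SetBinaryContent
             @Ref = N'test-ref',
             @Content = @blob,
             @ObjectID = '00000000-0000-0000-0000-000000000001';

        SELECT * FROM ObjectContent WHERE Ref = N'test-ref';

        -- Note: if @Content arrives NULL, DATALENGTH(@Content) is NULL, the
        -- "> 5" test is not true, and the INSERT is silently skipped -- which
        -- would match "executes but nothing changes".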

    Read the article

  • Django Database design -- Is this a good strategy for overriding defaults

    - by rh0dium
    Hi SO, I have a question on good database design practices and I would like to leverage you guys for pointers. The project started out simple:

      - Hey, we have a bunch of questions we want answered for every project (no problem)
      - Hey, we have so many questions -- can we group them into sections? (yup, we can do that)
      - Can we weight these questions? And I don't really want some of these questions for my project (yes, but we are getting difficult)

    And then I'm thinking they will want each section to have its own weight...

    Requirements. So here are the requirements -- for n number of projects:

      - Allow an admin member to select the questions for a project
      - Allow the admin member to re-weigh the questions or use their default weights
      - Allow the admin member to re-weight the sections
      - Allow team members to answer the questions

    So here is what I came up with. Please feel free to comment and provide better examples.

    models.py:

        from django.db import models
        from django.contrib.sites.models import Site
        from django.conf import settings

        class Section(models.Model):
            """This describes the various sections for a checklist."""
            name = models.CharField(max_length=64)
            description = models.TextField()

        class Question(models.Model):
            """This simply provides a simple way to list out the questions."""
            question = models.CharField(max_length=255)
            answer_type = models.CharField(max_length=16)
            description = models.TextField()
            section = models.ForeignKey(Section)

        class ProjectQuestion(models.Model):
            """These are the questions relevant to the project."""
            question = models.ForeignKey(Question)
            answer = models.CharField(max_length=255)
            required = models.BooleanField(default=True)
            weight = models.FloatField(default=XXX)

        class Project(models.Model):
            """Here is where we want to gather our questions."""
            questions = models.ManyToManyField(ProjectQuestion)

    Immediate questions:

      - When I start a project, any ideas on how to "pre-populate" the questions (and ultimately the weights) for the project?
      - Is there a generally accepted method for doing this process that I am missing? Basically, the idea is that the per-project questions override the default weight and store the answer.
      - It appears that a good chunk of the work will be done in the views and that a lot of checking will need to occur there. Is that OK?

    Again -- feel free to give me better strategies!! Thanks
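
    A hedged sketch of the "pre-populate" step: when a project is created, copy each chosen question into a ProjectQuestion, seeding its weight from a per-question default. This assumes Question grows a default_weight field (not in the models above); names are illustrative.

        def add_questions_to_project(project, questions):
            """Copy defaults into per-project rows; the admin can re-weigh later."""
            for question in questions:
                pq = ProjectQuestion.objects.create(
                    question=question,
                    answer='',
                    weight=question.default_weight,
                )
                project.questions.add(pq)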

    Read the article

  • Program structure in long running data processing python script

    - by fmark
    For my current job I am writing some long-running (think hours to days) scripts that do CPU-intensive data processing. The program flow is very simple: it proceeds into the main loop, completes the main loop, saves output, and terminates. The basic structure of my programs tends to be like so:

        <import statements>
        <constant declarations>
        <misc function declarations>

        def main():
            for blah in blahs():
                <lots of local variables>
                <lots of tightly coupled computation>
                for something in somethings():
                    <lots more local variables>
                    <lots more computation>
                <etc., etc.>
            <save results>

        if __name__ == "__main__":
            main()

    This gets unmanageable quickly, so I want to refactor it into something more maintainable without sacrificing execution speed. Each chunk of code relies on a large number of variables, however, so refactoring parts of the computation out to functions would make the parameter lists grow out of hand very quickly. Should I put this sort of code into a Python class and change the local variables into instance variables? It doesn't make a great deal of sense to me conceptually to turn the program into a class, as the class would never be reused, and only one instance would ever be created per run. What is the best-practice structure for this kind of program? I am using Python, but the question is relatively language-agnostic, assuming a modern object-oriented language's features.
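
    A minimal sketch of the class-based refactoring in question: promote the tightly coupled locals to attributes on a single run object, so the loop body can be split into named methods that share state without sprawling parameter lists. blahs()/somethings() are the generators from the structure above; everything else is illustrative.

        class Pipeline:
            def __init__(self):
                self.results = []

            def run(self):
                for blah in blahs():
                    self.process(blah)
                self.save()

            def process(self, blah):
                # Former outer-loop body; shared intermediates live on self,
                # so helpers stay small without long parameter lists.
                self.current = blah
                for something in somethings():
                    self.compute(something)

            def compute(self, something):
                self.results.append(something)   # placeholder computation

            def save(self):
                pass   # write self.results to disk

        if __name__ == "__main__":
            Pipeline().run()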

    Read the article

  • NSFetchedResultsController on secondary UITableView - how to query data?

    - by Jason
    I am creating a Core Data based navigation iPhone app with multiple screens. Let's say it is a flash-card application. The data model is very simple, with only two entities: Language and CardSet. There is a one-to-many relationship between the Language entity and the CardSet entities, so each Language may contain multiple CardSets. In other words, Language has a to-many relationship Language.cardSets which points to the list of CardSets, and CardSet has a relationship CardSet.language which points back to the Language.

    There are two screens: (1) an initial TableView screen, which displays the list of Languages; and (2) a secondary TableView screen, which displays the list of CardSets in the Language. On the initial screen, which lists the Languages, I am using NSFetchedResultsController to keep the list of Languages up to date. The screen passes the selected Language to the secondary screen.

    On the secondary screen, I am trying to figure out whether I should again use an NSFetchedResultsController to maintain the list of CardSets, or if I should work through Language.cardSets and simply pull the list out of the object model. The latter makes the most sense programmatically, because I already have the Language -- but then it would not automatically be updated on changes. I have looked at the NSFetchedResultsController documentation, and it seems like I can easily create predicates based on attributes -- but not relationships. I.e., I can create the following predicate:

        NSPredicate *predicate = [NSPredicate predicateWithFormat:@"name LIKE[c] 'Chuck Norris'"];

    How can I access my data through the direct relationship -- Language.cardSets -- and also have the table auto-update using NSFetchedResultsController? Is this possible?
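
    For what it's worth, NSPredicate can compare relationships directly, not just attributes, so the second screen can fetch CardSets keyed off the Language object that was passed in. A hedged sketch (entity and relationship names from the post; the sort key is illustrative, and NSFetchedResultsController requires at least one sort descriptor):

        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        [request setEntity:[NSEntityDescription entityForName:@"CardSet"
                                       inManagedObjectContext:context]];
        // Match CardSets whose to-one "language" relationship is this Language:
        [request setPredicate:[NSPredicate predicateWithFormat:@"language == %@", language]];
        [request setSortDescriptors:[NSArray arrayWithObject:
            [[[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES] autorelease]]];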

    Read the article

  • print friendly version of floated list

    - by Brad
    I have a list of phone extensions and I want to print a friendly version of it. I have a print CSS for it to print appropriately onto paper; the extensions are located within an unordered list, which is floated to the left:

        <ul>
            <li>Larry Hughes <span class="ext">8291</span></li>
            <li>Chuck Davis <span class="ext">3141</span></li>
            <li>Kevin Skillis <span class="ext">5115</span></li>
        </ul>

    I float it left, and when it prints the second page, it leaves off the name part of the list (in Firefox; it works fine in Google Chrome and IE), see here: http://cl.ly/de965aea63f66c13ba32

    I am referring to this: http://www.alistapart.com/articles/goingtoprint/ -- they mentioned something about applying float: none; to the content part of the page. If I do that, how should I go about making the list show up in 4 columns? It is a dynamic list, pulled from a database. Any help is appreciated.
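
    One hedged, float-free direction: let the browser flow the single list into four columns in the print stylesheet and manage the page breaks itself. A sketch (the class name is illustrative; vendor prefixes were still needed in browsers of this era):

        @media print {
            ul.extensions {
                float: none;
                -moz-column-count: 4;
                -webkit-column-count: 4;
                column-count: 4;
            }
            ul.extensions li {
                page-break-inside: avoid;   /* keep name + extension together */
            }
        }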

    Read the article

  • Configure Postfix to send/relay email through Gmail (smtp.gmail.com) via port 587

    - by tom smith
    Hi. I'm using CentOS 5.4 with Postfix. I can do

        mail [email protected]
        subject: blah
        test
        .
        Cc:

    and the message gets sent to Gmail, but it lands in the spam folder, which is to be expected. My goal is to be able to generate email messages and have them appear in the regular inbox!

    As I understand Postfix/Gmail, it's possible to configure Postfix to send/relay mail as an authenticated, valid user using port 587, after which the mail would no longer be seen as spam. I've tried a number of parameters based on different sites/articles from the 'net, with no luck. Some of the articles actually seem to conflict with other articles! I've also looked over the Server Fault postings on this, but I'm still missing something... I've also talked to a few people on IRC (CentOS/Postfix) and still have questions.

    So I'm turning to Server Fault, once again! If there's someone who's managed to accomplish this, would you mind posting your main.cf, sasl_passwd, and any other conf files that you use to get this working? If I can review your config files, I can hopefully see where I've screwed up and figure out how to correct the issue. Thanks for reading this, and for any help/pointers you provide! -tom
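
    A hedged sketch of the usual recipe (the parameter names are standard Postfix; the CA bundle path is the common CentOS location -- verify locally, and substitute the authenticating account's credentials):

        # /etc/postfix/main.cf
        relayhost = [smtp.gmail.com]:587
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_use_tls = yes
        smtp_tls_CAfile = /etc/pki/tls/certs/ca-bundle.crt

        # /etc/postfix/sasl_passwd  (then run: postmap /etc/postfix/sasl_passwd
        # and: service postfix reload)
        [smtp.gmail.com]:587    yourname@gmail.com:yourpassword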

    Read the article

  • Instabilities with bridged and bonded interfaces

    - by Henry-Nicolas Tourneur
    I posted yesterday to get a working setup with several bridged interfaces used for virtual machines (KVM/libvirt). One of the bridged interfaces just uses eth3 as its port, while the second one (public traffic) uses an Ethernet bonded interface. That setup is working, but not all the time! I can start a download from a VM, and then it will stall and freeze! So I don't know if my bridge parameters are correct -- could you check the config below?

        iface eth3 inet manual

        auto bond0
        iface bond0 inet manual
            slaves eth1 eth2
            pre-up ip link set bond0 up
            down ip link set bond0 down

        auto br0
        iface br0 inet static
            address 10.160.0.7
            netmask 255.255.255.128
            bridge_ports eth3
            bridge_fd 9
            bridge_hello 2
            bridge_maxage 12
            bridge_stp on

        auto br0:1
        iface br0:1 inet static
            address 10.160.0.9
            netmask 255.255.255.255

        auto br0:2
        iface br0:2 inet static
            address 10.160.0.10
            netmask 255.255.255.255

        auto br1
        iface br1 inet static
            address 217.4.40.242
            netmask 255.255.255.240
            gateway 217.4.40.241
            pre-up /etc/network/firewall start
            bridge_ports bond0
            bridge_fd 9
            bridge_hello 2
            bridge_maxage 12
            bridge_stp on

        auto br1:1
        iface br1:1 inet static
            address 217.4.40.252
            netmask 255.255.255.255

        auto br1:2
        iface br1:2 inet static
            address 217.4.40.253
            netmask 255.255.255.255

    And yes, the host also sometimes logs martians:

        kernel: [249146.055172] martian source 10.160.0.17 from 10.160.0.10, on dev vnet2
        kernel: [249146.073122] ll header: ff:ff:ff:ff:ff:ff:54:52:00:76:c3:5c:08:06
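
    One hedged observation: the bond0 stanza above sets no bonding mode or link monitoring, and without miimon the bond never notices a dead or flapping slave -- which could match the stall-then-freeze behaviour. A sketch of the stanza with those options added (mode and interval are illustrative; match them to the switch configuration):

        auto bond0
        iface bond0 inet manual
            slaves eth1 eth2
            bond-mode active-backup
            bond-miimon 100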

    Read the article

  • Configure PEAR on CentOS 6 and PLESK

    - by RCNeil
    I'm hoping to get a little assistance with configuring PEAR to work properly. I have a PHP file that calls PEAR's Mail and Mail_Mime packages, and I believe I am missing some steps, because I keep getting the very common

        Warning: include_once(Mail.php): failed to open stream: No such file or directory
        Warning: include_once(Mail_Mime/mime.php): failed to open stream: No such file or directory

    It is installed:

        Installed packages, channel pear.php.net:
        =========================================
        Package          Version State
        Archive_Tar      1.3.7   stable
        Console_Getopt   1.2.3   stable
        Mail             1.2.0   stable
        Mail_Mime        1.8.3   stable
        PEAR             1.9.4   stable
        Structures_Graph 1.0.4   stable
        XML_RPC          1.5.4   stable
        XML_Util         1.2.1   stable

    And according to this TUT, I need to configure it appropriately in each vhost. I have already gone through and adjusted the php.ini file, but when the TUT speaks of the

        php_admin_value open_basedir "/var/www/vhosts/example.com/httpdocs:/tmp:/usr/share/pear:/local/PEAR"

    in my /var/www/vhosts/example.com/conf/httpd.include file, I kind of get lost. There are several httpd.include files in that directory, all preceded by very long numerical strings. All I want to do is have an email attachment in my form... Any insight or similar experiences shared would be greatly appreciated.
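
    A hedged first check before touching the vhost files: confirm PHP's include_path actually contains the PEAR tree, since "Mail.php: failed to open stream" is usually an include_path problem (also note that Mail_Mime installs its file as Mail/mime.php, not Mail_Mime/mime.php). A sketch:

        <?php
        // Show what PHP searches; /usr/share/pear should appear in here.
        echo get_include_path(), "\n";

        // If it is missing, append it (php.ini or the vhost config would be
        // the permanent fix; this is just for testing):
        set_include_path(get_include_path() . PATH_SEPARATOR . '/usr/share/pear');

        include_once 'Mail.php';
        include_once 'Mail/mime.php';   // correct path for the Mail_Mime package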

    Read the article

  • Using mod_rewrite to mask /cgi-bin/abc as /def

    - by Alois Mahdal
    I have a seemingly easy task, but somehow I just can't get it to work. Some interesting lines from my httpd.conf:

        ...
        DocumentRoot "D:/opt/apache/htdocs"
        ...
        ScriptAlias /cgi-bin/ "D:/opt/apache/cgi-bin/"
        ...
        <Directory "D:/opt/apache/htdocs">
            Options Indexes FollowSymLinks ExecCGI
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

        <Directory "D:/opt/apache/cgi-bin/">
            AllowOverride None
            Options ExecCGI
            Order allow,deny
            Allow from all
        </Directory>

    (I know it's dumb, but it's only a testing machine :D.)

    Now, I have d:\opt\apache\cgi-bin\expired.pl, and I expect GET /licensecheck.php?code=123456. I wish to fool the client into thinking it speaks with /licensecheck.php, but actually return data from \expired.pl. What I tried was setting the following at the end of httpd.conf:

        RewriteEngine on
        RewriteRule ^/licensecheck.php$ /cgi-bin/expired.pl [T=application/x-httpd-cgi,L]

    ...but it keeps 404-ing me, looking for a cgi-bin directory (not cgi-bin\expired.pl) in my DocumentRoot!

        [error] [client 127.0.0.1] script not found or unable to stat: D:/opt/apache/htdocs/cgi-bin

    /cgi-bin/expired.pl and all other scripts in /cgi-bin/ work as expected. The only way I could make it work was actually putting \expired.pl in the DocumentRoot, but I don't want this -- I want my cgi-bin neatly separated :)
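
    A hedged sketch of the usual fix: in per-server context, the substitution of a RewriteRule is treated as a file path, so ScriptAlias never gets a chance to map /cgi-bin/. The PT (passthrough) flag hands the rewritten URI back to the URL-mapping phase, letting ScriptAlias resolve it and the CGI handler run it:

        RewriteEngine on
        RewriteRule ^/licensecheck\.php$ /cgi-bin/expired.pl [PT,L]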

    Read the article

  • Large, high performance object or key/value store for HTTP serving on Linux

    - by Tommy
    I have a service that serves images to end users at a very high rate using plain HTTP. The images vary between 4 and 64 kbytes, and there are 1,300,000,000 of them in total. The dataset is about 30 TiB in size, and changes (new objects, updates, deletes) make up less than 1% of the requests. The number of requests per second varies from 240 to 9000 and is dispersed pretty much all over, with few objects being especially "hot".

    As of now, these images are files on an ext3 filesystem distributed read-only across a large number of midrange servers. This poses several problems:

      - Using a filesystem is very inefficient, since the metadata size is large, the inode/dentry cache is volatile on Linux, and some daemons tend to stat()/readdir() their way through the directory structure, which in my case becomes very expensive.
      - Updating the dataset is very time consuming and requires remounting between set A and set B.
      - The only reasonable handling is operating on the block device for backup, copying, etc.

    What I would like is a daemon that:

      - speaks HTTP (get, put, delete, and perhaps update)
      - stores the data in an efficient structure. The index should remain in memory, and considering the number of objects, the overhead must be small. The software should be able to handle massive numbers of connections with little (if any) ramp-up time. The index should be read into memory at startup. Statistics would be nice, but not mandatory.

    I have experimented a bit with riak, redis, mongodb, kyoto, and varnish with persistent storage, but I haven't had the chance to dig in really deep yet.

    Read the article
