Search Results

Search found 719 results on 29 pages for 'ray wits'.

Page 24/29 | < Previous Page | 20 21 22 23 24 25 26 27 28 29  | Next Page >

  • Entity Framework Update Error in ASP.NET MVC with related entity

    - by Barry
    I have run into a problem and have searched and tried everything I can to find a solution, but to no avail. I am using the same repository and context throughout the process, and I have a Booking entity and a UserExtension entity. I get my form collection back from my page and create a new booking:

        public ActionResult Create(FormCollection collection)
        {
            Booking toBooking = new Booking();
            // ... validation and property assignment ...
        }

    I then do some validation and property assignment, and find an associated BidInstance (I have checked, and the bid is not null):

        toBooking.BidInstance = bid;

    Finally, I get the user extension record for the current IPrincipal user, as below:

        UserExtension loggedInUser = m_BookingRepository.GetBookingCurrentUser(User);
        toBooking.UserExtension = loggedInUser;

    The code for GetBookingCurrentUser is:

        public UserExtension GetBookingCurrentUser(IPrincipal currentUser)
        {
            // Look up the membership user by name, then its UserExtension record.
            var user = (from u in Context.aspnet_Users.Include("UserExtension")
                        where u.UserName == currentUser.Identity.Name
                        select u).FirstOrDefault();
            if (user != null)
            {
                var userextension = (from u in Context.UserExtension.Include("aspnet_Users")
                                     where u.aspnet_Users.UserId == user.UserId
                                     select u).FirstOrDefault();
                return userextension;
            }
            else
            {
                return null;
            }
        }

    It returns the UserExtension fine and assigns it fine. (I originally used aspnet_Users directly, but encountered this problem, so I changed it to the extension entity.) As soon as I call:

        Context.AddToBooking(booking);
        Context.SaveChanges();

    I get the following exception, and I'm completely baffled by how to fix it:

        Entities in 'FutureFlyersEntityModel.Booking' participate in the 'FK_Booking_UserExtension' relationship. 0 related 'UserExtension' were found. 1 'UserExtension' is expected.

    The final error that reaches the front end is:

        Metadata information for the relationship 'FutureFlyersModel.FK_Booking_BidInstance' could not be retrieved. Make sure that the EdmRelationshipAttribute for the relationship has been defined in the assembly. Parameter name: relationshipName.

    But both related entities are set on the Booking entity that is passed through. Please help: I'm at my wits' end with this.

    Read the article

  • Rails: constraint violation on create but not on update

    - by justinbach
    Note: This is a "railsier" (and more succinct) version of this question, which was getting a little long.

    I'm getting Rails behavior on a production server that I can't replicate on the development server. The codebases are identical save for credentials and caching settings, and both are powered by Oracle 10g databases with identical schemas (but different data).

    My Rails application contains a user model, which has_one registration; registration in turn has_and_belongs_to_many company_ownerships through a registration_ownerships table. Upon registering, users fill out data pertinent to all three models, including a series of checkboxes indicating what registration_ownerships might apply to their account.

    On the dev server, the registration process is seamless, no matter what data is entered. On production, however, if users check off any of the company ownership fields before submitting their registration, Oracle complains about a constraint violation on the primary key of the company_ownerships table (which is a two-field key based on company_ownership_id and registration_id) and users get the standard Rails 500 error screen. In every case, I've verified that no conflicting record on these two fields exists in the production database, so I don't know why the constraint is getting violated.

    To further confuse things, if a user registers without listing any ownerships and later goes back and modifies their account to reflect ownership data (which is done through the same interface), the application happily complies with their request and Oracle is well-behaved (this is both on production and dev).

    I've spent the past couple of days trying to figure out what might be causing this problem and am reaching the end of my wits. Any advice would be greatly appreciated!

    Read the article

  • NSURLConnection and empty post variables

    - by SooDesuNe
    I'm at my wits' end with this one, because I've used very similar code in the past and it worked just fine. The following code results in empty $_POST variables on the server. I verified this with:

        file_put_contents('log_file_name', "log: ".$word, FILE_APPEND);

    The only contents of log_file_name were "log: ". I then verified the PHP with a simple HTML form; it performed as expected.

    The Objective-C:

        NSString *word = @"this_word_gets_lost";
        NSString *myRequestString = [NSString stringWithFormat:@"word=%@", word];
        [self postAsynchronousPHPRequest:myRequestString
                                  toPage:@"http://www.mysite.com/mypage.php"
                                delegate:nil];

        - (void)postAsynchronousPHPRequest:(NSString *)request
                                    toPage:(NSString *)URL
                                  delegate:(id)delegate
        {
            NSData *requestData = [NSData dataWithBytes:[request UTF8String]
                                                 length:[request length]];
            NSMutableURLRequest *URLrequest =
                [[NSMutableURLRequest alloc] initWithURL:[NSURL URLWithString:URL]];
            [URLrequest setHTTPMethod:@"POST"];
            [URLrequest setHTTPBody:requestData];
            [NSURLConnection connectionWithRequest:URLrequest delegate:delegate];
            [URLrequest release];
        }

    The PHP:

        $word = $_POST['word'];
        file_put_contents('log_file_name', "log: ".$word, FILE_APPEND);

    What am I doing wrong in the Objective-C that would cause the $_POST variable to be empty on the server?
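
    A quick way to isolate whether the fault is client-side or server-side is to reproduce the POST from a known-good client. Below is a minimal sketch in Python 3 (the URL and field name are taken from the question, not verified); if this request also logs an empty value, the PHP side, or a redirect on the server that drops the body, is the likelier culprit:

        import urllib.parse
        import urllib.request

        # Build the same "word=..." form body with an explicit UTF-8 byte encoding.
        body = urllib.parse.urlencode({"word": "this_word_gets_lost"}).encode("utf-8")

        # urllib sends Content-Type: application/x-www-form-urlencoded for POST data
        # and a Content-Length that matches the bytes actually written.
        req = urllib.request.Request("http://www.mysite.com/mypage.php", data=body)
        with urllib.request.urlopen(req) as resp:
            print(resp.status, resp.read()[:200])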

    Read the article

  • Access report not showing data

    - by Brian Smith
    I have two queries that I am using to generate a report, but when I run the report, three fields do not show any data at all.

    Query 1:

        SELECT ClientSummary.Field3 AS PM,
               ClientSummary.[Client Nickname 2] AS [Project #],
               ClientSummary.[Client Nickname 1] AS Customer,
               ClientSummary.[In Reference To] AS [Job Name],
               ClientSummary.Field10 AS Contract,
               (SELECT SUM([Billable Slip Value]) FROM Util_bydate AS U1
                WHERE U1.[Client Nickname 2] = ClientSummary.[Client Nickname 2]) AS [This Week],
               (SELECT SUM([Billable Slip Value]) FROM Util AS U2
                WHERE U2.[Client Nickname 2] = ClientSummary.[Client Nickname 2]) AS [To Date],
               [To Date]/[Contract] AS [% Spent],
               0 AS Backlog,
               ClientSummary.[Total Slip Fees & Costs] AS Billed,
               ClientSummary.Payments AS Paid,
               ClientSummary.[Total A/R] AS Receivable,
               [Forms]![ReportMenu]![StartDate] AS [Start Date],
               [Forms]![ReportMenu]![EndDate] AS [End Date]
        FROM ClientSummary;

    Query 2:

        SELECT JobManagement_Summary.pm,
               JobManagement_Summary.[project #],
               JobManagement_Summary.Customer,
               JobManagement_Summary.[Job Name],
               JobManagement_Summary.Contract,
               IIf(IsNull([This Week]), 0, [This Week]) AS [N_This Week],
               IIf(IsNull([To Date]), 0, [To Date]) AS [N_To Date],
               [% Spent],
               JobManagement_Summary.Backlog,
               JobManagement_Summary.Billed,
               JobManagement_Summary.Paid,
               JobManagement_Summary.Receivable,
               JobManagement_Summary.[Start Date],
               JobManagement_Summary.[End Date]
        FROM JobManagement_Summary;

    When I run the report from query 2, these three fields come back blank: [N_This Week], [N_To Date], and [% Spent]. It isn't the IIf functions, as it doesn't matter whether I keep those or remove them. Any thoughts?

    If I connect the report directly to the first recordset it works fine, but then SQL throws the error message "Multi-level GROUP BY clause not allowed in subquery." Is there any way to get around that message and link to it directly, or does anyone have ANY clue why these fields are coming back blank? I am at my wits' end here!

    Read the article

  • Accessing global variable in multithreaded Tomcat server

    - by jwegan
    I have a singleton object that I construct like this:

        private static volatile KeyMapper mapper = null;

        public static KeyMapper getMapper()
        {
            if (mapper == null)
            {
                synchronized (Utils.class)
                {
                    if (mapper == null)
                    {
                        mapper = new LocalMemoryMapper();
                    }
                }
            }
            return mapper;
        }

    The class KeyMapper is basically a synchronized wrapper around a HashMap with only two functions, one to add a mapping and one to remove a mapping.

    When running in Tomcat 6.24 on my 32-bit Windows machine, everything works fine. However, when running on a 64-bit Linux machine (CentOS 5.4 with OpenJDK 1.6.0-b09), I add one mapping and print out the size of the HashMap used by KeyMapper to verify the mapping got added (i.e., verify size == 1). Then I try to retrieve the mapping with another request, and I keep getting null; when I check the size of the HashMap, it is 0.

    I'm confident the mapping isn't accidentally being removed, since I've commented out all calls to remove (and I don't use clear or any other mutators, just get and put). The requests are going through Tomcat 6.24 (configured to use 200 threads with a minimum of 4 threads), and I passed -Xnoclassgc to the JVM to ensure the class isn't inadvertently getting garbage collected (the JVM is also running in -server mode). I also added a finalize method to KeyMapper to print to stderr if it ever gets garbage collected, and verified that it isn't.

    I'm at my wits' end, and I can't figure out why one minute the entry in the HashMap is there and the next it isn't :(
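
    For reference, a sketch of the same double-checked initialization in Python (a stand-in for illustration, not the poster's Java; the map class here is a hypothetical stub). The pattern itself is sound when the reference is volatile, which is why a duplicated classloader (for example, the same webapp deployed under two contexts, each holding its own copy of the static field) is a common cause of two-singletons symptoms like this one:

        import threading

        class LocalMemoryMapper:
            """Hypothetical stand-in for the question's KeyMapper: a locked dict."""
            def __init__(self):
                self._map = {}
                self._lock = threading.Lock()

            def add(self, key, value):
                with self._lock:
                    self._map[key] = value

            def remove(self, key):
                with self._lock:
                    self._map.pop(key, None)

        _mapper = None
        _init_lock = threading.Lock()

        def get_mapper():
            """Double-checked initialization: cheap test, then lock, then re-test."""
            global _mapper
            if _mapper is None:
                with _init_lock:
                    if _mapper is None:
                        _mapper = LocalMemoryMapper()
            return _mapper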

    Read the article

  • Evaluating JavaScript Arrays

    - by FailBoy
    I have an array that contains arrays, if that makes any sense. For example:

        [[1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 6]]

    I want to see whether a given array exists within the outer array, e.g. whether [1, 2, 3] is duplicated at all. I have tried the .indexOf method, but it doesn't find the duplicate. I have also used Ext JS to loop through the array manually and evaluate each inner array. This is how I did it:

        var arrayToSearch = [[1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 6]];
        var newArray = [1, 2, 3];
        Ext.each(arrayToSearch, function(entry, index){
            console.log(newArray, entry);
            if(newArray == entry){
                console.log(index);
            };
        });

    This also does not detect the duplicate. The console.log will output [1, 2, 3] and [1, 2, 3], but will not recognize them as equal. I have also tried the === operator, but obviously since == doesn't work, === won't work either. I am at my wits' end. Any suggestions?
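
    The comparisons fail because == and === compare arrays by reference, not by contents: two distinct array objects are never equal, even when their elements match. A sketch of the contents-based check in Python, where list equality is elementwise, with a serialization fallback that mirrors the common JSON.stringify workaround in reference-equality languages:

        import json

        array_to_search = [[1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 6]]
        new_array = [1, 2, 3]

        # Python's == compares lists element by element, so membership just works:
        print(new_array in array_to_search)       # True
        print(array_to_search.index(new_array))   # 0

        # The equivalent idea where == is reference equality: compare canonical
        # serializations of the arrays instead of the array objects themselves.
        matches = [i for i, entry in enumerate(array_to_search)
                   if json.dumps(entry) == json.dumps(new_array)]
        print(matches)                            # [0]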

    Read the article

  • How to convert between different currencies?

    - by sil3nt
    Hey there, this is part of a question I got in class; I'm at the final stretch, but this has become a major problem. In it I'm given a certain value, called the "gold value", which is 40.5 (this value changes with input), and I have these constants:

        const int RUBIES_PER_DIAMOND  = 5;   // relative values
        const int EMERALDS_PER_RUBY   = 2;
        const int GOLDS_PER_EMERALDS  = 5;
        const int SILVERS_PER_GOLD    = 4;
        const int COPPERS_PER_SILVER  = 5;

        const int   DIAMOND_VALUE = 50;      // gold values
        const int   RUBY_VALUE    = 10;
        const int   EMERALD_VALUE = 5;
        const float SILVER_VALUE  = 0.25;
        const float COPPER_VALUE  = 0.05;

    This means that for every diamond there are 5 rubies, and for every ruby there are 2 emeralds, and so on. The "gold value" of every diamond, for example, is 50 (DIAMOND_VALUE = 50): that is how much one diamond is worth in golds.

    My problem is converting 40.5 into these diamond and ruby values. I know the answer is 4 rubies and 2 silvers, but how do I write the algorithm so that it gives the best estimate for every gold value that comes along? Please help! I'm at my wits' end.
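
    This is a change-making problem: walk the denominations from most to least valuable, taking as many of each as fit. A sketch in Python (the constants carried over from the C++ question); working in coppers, the smallest unit, sidesteps floating-point drift:

        # Gold value of each denomination, most valuable first (from the constants above).
        DENOMINATIONS = [("diamonds", 50), ("rubies", 10), ("emeralds", 5),
                         ("golds", 1), ("silvers", 0.25), ("coppers", 0.05)]

        def convert(gold_value):
            # 1 gold = 4 silvers = 20 coppers, so count everything in whole coppers.
            remaining = round(gold_value * 20)
            counts = {}
            for name, value in DENOMINATIONS:
                unit = round(value * 20)              # denomination size in coppers
                counts[name], remaining = divmod(remaining, unit)
            return counts

        print(convert(40.5))
        # {'diamonds': 0, 'rubies': 4, 'emeralds': 0, 'golds': 0, 'silvers': 2, 'coppers': 0}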

    Read the article

  • Python NameError when attempting to use a user-defined class

    - by Michael Herold
    I'm getting a weird instance of a NameError when attempting to use a class I wrote. In a directory, I have the following file structure:

        dir/
            ReutersParser.py
            test.py
            reut-xxx.sgm

    My custom class is defined in ReutersParser.py, and I have a test script defined in test.py. The ReutersParser looks something like this:

        from sgmllib import SGMLParser

        class ReutersParser(SGMLParser):
            def __init__(self, verbose=0):
                SGMLParser.__init__(self, verbose)
            # ... rest of parser ...

        if __name__ == '__main__':
            f = open('reut2-short.sgm')
            s = f.read()
            p = ReutersParser()
            p.parse(s)

    It's a parser to deal with SGML files of Reuters articles; the self-test at the bottom works perfectly. Anyway, I'm going to use it in test.py, which looks like this:

        from ReutersParser import ReutersParser

        def main():
            parser = ReutersParser()

        if __name__ == '__main__':
            main()

    When it gets to that parser line, I get this error:

        Traceback (most recent call last):
          File "D:\Projects\Reuters\test.py", line 34, in <module>
            main()
          File "D:\Projects\Reuters\test.py", line 19, in main
            parser = ReutersParser()
          File "D:\Projects\Reuters\ReutersParser.py", line 38, in __init__
            SGMLParser.__init__(self, verbose)
        NameError: global name 'sgmllib' is not defined

    For some reason, when I try to use my ReutersParser in test.py, it throws an error saying it cannot find sgmllib, which is a built-in module. I'm at my wits' end trying to figure out why the import won't work. What's causing this NameError? I've tried importing sgmllib in test.py, and that works, so I don't understand why it can't be found when running the constructor of my ReutersParser.
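
    One hypothesis, offered as a guess from the shape of the traceback: the source shown never references the bare name sgmllib, so the NameError suggests the interpreter is executing something other than this source. A common cause is a stale ReutersParser.pyc compiled from an older version that called sgmllib.SGMLParser.__init__ directly (tracebacks display lines from the current .py even while running old bytecode). Deleting the .pyc forces a recompile; an import style that resolves either way:

        # ReutersParser.py: import both the module and the class, so the parent's
        # __init__ can be reached as SGMLParser or as sgmllib.SGMLParser.
        import sgmllib
        from sgmllib import SGMLParser

        class ReutersParser(SGMLParser):
            def __init__(self, verbose=0):
                SGMLParser.__init__(self, verbose)  # resolved via the imports above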

    Read the article

  • Rails - HABTM Relationship -- How Can I Find A Record Based On An Attribute Of The Associated Model

    - by ChrisWesAllen
    I have set up this HABTM relationship in the past and it worked before. Now it isn't working, and I'm at my wits' end trying to figure out what's wrong. I've been looking through the Rails guides all day and can't seem to figure out what I'm doing wrong, so help would really be appreciated.

    I have two models connected through a join model, and I'm trying to find records based on an attribute of the associated model.

    Event.rb:

        has_and_belongs_to_many :interests

    Interest.rb:

        has_and_belongs_to_many :events

    And a join table migration that was created like this:

        create_table 'events_interests', :id => false do |t|
          t.column :event_id, :integer
          t.column :interest_id, :integer
        end

    I tried:

        @events = Event.all(:include => :interest, :conditions => [" interest.id = ?", 4 ] )

    but got the error "Association named 'interest' was not found; perhaps you misspelled it?"... which I didn't, of course. I then tried:

        @events = Event.interests.find(:all, :conditions => [" interest.id = ?", 4 ] )

    but got the error "undefined method `interests' for #Class:0x4383348".

    How can I find the Events that have an interest id of 4? I'm definitely going bald from this, lol.

    Read the article

  • JavaOne in Brazil

    - by janice.heiss(at)oracle.com
    JavaOne in Brazil, currently taking place in São Paulo, is one event I'd love to attend. I once heard "father of Java" James Gosling talk about Java developers throughout the world. He observed that there were good developers everywhere. It was not the case, he said, that the really good developers are in one place and the not-so-good developers are in another; he encountered excellent developers everywhere. Then he paused and said that the craziest developers were definitely the Brazilians. As anyone who knows James would realize, this was meant as high praise. He said the Brazilians would work through the night on projects and were very enthusiastic and spontaneous, features that Brazilian culture is known for.

    Brazilian developers are responsible for creating one of the most impressive uses of Java ever: the applications that run the Brazilian health services. Starting from scratch, they created a system that enables an expert doctor in Rio to look at an X-ray of a patient near the Amazon and offer advice. One of the main architects of this was Java Champion Fabiane Nardon, the distinguished Brazilian Java architect and open-source evangelist. As she writes in her blog:

        "In 2003, I was invited to assemble a team and architect a Public Healthcare Information System for the city of São Paulo, the largest in Latin America, with 14 million inhabitants. The resulting software had 2.5 million lines of code and it was created, from specification to production, in only 10 months. At the time, the software was considered the largest J2EE application in the world and was featured in several articles, as this one. As a result, we won the Duke's Choice Award in 2005 during JavaOne, the largest development conference in the world. At the time, Sun Microsystems made a short documentary about our work.

        "In 2007, lightning struck twice and I was again invited to assemble a new team and architect an even larger information system for healthcare. And thus I became CTO and one of the founders of Zilics Healthcare Information Systems.

        "In 2010, I started to research and work on cloud computing technology and became leader of the LSI-TEC Cloud Computing group. LSI-TEC is a research laboratory in the University of São Paulo, one of the best in Brazil. Thus, I became one of the ghost writers behind the popular cloud computing Twitter account @the_cloud."

    You can see and hear Nardon in a four-minute documentary on Java and the Brazilian health care system produced by Sun Microsystems. And you can listen to a September 2010 podcast with Nardon and her fellow Brazilian Java Champion Bruno Souza (known in Brazil as "Java Man"), starting at 11:10 into the podcast.

    Next year, I hope to be reporting from Brazil at JavaOne!

    Read the article

  • Oracle Enterprise Manager users present today at Oracle Users Forum

    - by Anand Akela
    Oracle Users Forum starts in a few minutes at Moscone West, Levels 2 & 3. There are hundreds of Oracle user sessions during the day, and many Oracle Enterprise Manager users are presenting today as well.

    In addition, we will have a Twitter chat today from 11:30 AM to 12:30 PM with IOUG leaders, Enterprise Manager SIG contributors, and many speakers. You can participate in the chat using the hash tag #em12c on Twitter.com or by going to tweetchat.com/room/em12c (a Twitter credential is needed to participate).

    Feel free to join IOUG and Enterprise Manager team members at the User Group Pavilion on the 2nd floor, Moscone West; RSVP by going to http://tweetvite.com/event/IOUG. And don't miss the Oracle OpenWorld welcome keynote by Larry Ellison this evening at 5 PM.

    Here is the complete list of Oracle Enterprise Manager sessions during the Oracle Users Forum:

        8:00AM - 8:45AM    UGF4569 - Oracle RAC Migration with Oracle Automatic Storage Management and Oracle Enterprise Manager 12c
                           Vinod Emmanuel (Database Engineering, Dell, Inc.) and Wendy Chen (Sr. Systems Engineer, Dell, Inc.)
                           Moscone West - 2011

        8:00AM - 8:45AM    UGF10389 - Monitoring Storage Systems for Oracle Enterprise Manager 12c
                           Anand Ranganathan (Product Manager, NetApp)
                           Moscone West - 2016

        9:00AM - 10:00AM   UGF2571 - Make Oracle Enterprise Manager Sing and Dance with the Command-Line Interface
                           Ray Smith (Senior Database Administrator, Portland General Electric)
                           Moscone West - 2011

        10:30AM - 11:30AM  UGF2850 - Optimal Support: Oracle Enterprise Manager 12c Cloud Control, My Oracle Support, and More
                           April Sims (DBA, Southern Utah University)
                           Moscone West - 2011

        12:30PM - 2:00PM   UGF5131 - Migrating from Oracle Enterprise Manager 10g Grid Control to 12c Cloud Control
                           Leighton Nelson (Database Administrator, Mercy)
                           Moscone West - 2011

        2:15PM - 3:15PM    UGF6511 - Database Performance Tuning: Get the Best out of Oracle Enterprise Manager 12c Cloud Control
                           Mike Ault (Oracle Guru, Texas Memory Systems Inc.) and Tariq Farooq (CEO/Founder, BrainSurface)
                           Moscone West - 2011

        3:30PM - 4:30PM    UGF4556 - Will It Blend? Verifying Capacity in Server and Database Consolidations
                           Jeremiah Wilton (Database Technology, Blue Gecko / DatAvail)
                           Moscone West - 2018

        3:30PM - 4:30PM    UGF10400 - Oracle Enterprise Manager 12c: Monitoring, Metric Extensions, and Configuration Best Practices
                           Kellyn Pot'Vin (Sr. Technical Consultant, Enkitec)
                           Moscone West - 2011

    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • links for 2011-01-06

    - by Bob Rhubart
    - Coming to your town: Oracle Enterprise Cloud Summit
      During these full-day events, cloud experts will share real-world best practices, reference architectures, detailed customer case studies, and more. Events scheduled in cities around the world. (tags: oracle otn cloud event)

    - Webcast: Security and Compliance for Private Cloud Consolidation
      Roxana Bradescu, Senior Director for Oracle Database Security Products, discusses Oracle Database Security Solutions to securely consolidate data and meet compliance requirements within private cloud computing environments. Thursday, January 13, 2011. 10am PST | 1pm EST (tags: oracle cloud security)

    - Answering Questions about Mobile Devices | The AppsLab
      "How do the numbers of Android and iOS users compare? How often are people switching? Where are all these BlackBerry and Nokia users? Do they plan to jump to Android or iOS? What about webOS? Is it relevant?" Some answers in this AppsLab survey. (tags: oracle otn enterprise2.0 mobilecomputing iphone blackberry android)

    - Webcast: Achieve 24/7 Cloud Availability Without Expensive Redundancy
      Ashish Ray and Matthew Baier discuss Oracle's Maximum Availability Architecture and Oracle Database 11g. (tags: oracle cloud highavailability webcast)

    - Converting a PV vm back into an HVM vm (Wim Coekaerts Blog)
      "I wanted to convert one of my VMs that was based on a paravirt kernel into a vm that just boots as a regular hardware virt VM with a standard x86-64 kernel... It took me a little while to figure out the fastest way, so now that I have it pretty much down I wanted to share the steps." - Wim Coekaerts (tags: oracle otn virtualization oraclevm)

    - @OTN_Garage: Resources for VirtualBox 4.0
      Rick "@OTN_Garage" Ramsey shares links to several resources for those with a VirtualBox jones. (tags: oracle otn virtualization virtualbox)

    - 'Federal Service Bus' Helps Belgian Government Speak a Common Language - SOA in Action Blog
      "The first SOA-enabled application was developed in less than two months and was fully operational in approximately 10 weeks. In addition, new FSB modules are reusable for other Belgian e-government applications, saving both time and taxpayer dollars." - Joe McKendrick (tags: soa oracle)

    - Show Notes: Architects in the Cloud (ArchBeat Podcast)
      The complete 4-part interview with Stephen G. Bennett and Archie Reed, the authors of "Silver Clouds, Dark Linings: A Concise Guide to Cloud Computing," is now available. (tags: oracle otn cloud podcast archbeat)

    Read the article

  • DOAG 2012 and Educause 2012

    - by Chris Kawalek
    Oracle understands the value of desktop virtualization and how customers have really embraced it as a top-tier method to deliver access to applications and data. Just as supporting operating systems other than Windows in the enterprise desktop space started to become necessary perhaps 5-7 years ago, supporting desktop virtualization with VDI, application virtualization, thin clients, and tablet access is becoming necessary today in 2012. Any application strategy needs to have a secure mobile component, and a solution that gives you a holistic strategy across both mobile and fixed-asset devices (i.e., desktop PCs) is crucial to success.

    This means it's probably useful to learn about desktop virtualization, even if it's not in your typical area of responsibility. A good way to do that is at one of the many trade shows where we exhibit. Here are two examples:

    DOAG 2012 Conference + Exhibition

    The DOAG Conference is fast approaching, starting November 20th in Nuremberg, Germany. If you've been reading this blog for a while, you might remember that we attended last year as well. This conference is fantastic for us because we get to speak directly to users of Oracle products. In many cases, those DBAs, IT managers, and other infrastructure folks are looking for ways to deal with the burgeoning BYOD model, as well as ways of streamlining their standard desktop and access technologies. We have a couple of sessions where you can learn a great deal about how Oracle can help with these points (see the session schedule, under "Infrastruktur & Hardware"). The two sessions focused on desktop virtualization are:

    - Oracle VDI Best Practice unter Linux (Oracle VDI Best Practices Under Linux)
    - Virtual Desktop Infrastructure Implementierungen und Praxiserfahrungen (Virtual Desktop Infrastructure Implementations and Best Practices)

    We will also have experts on hand at the booth to answer your questions on using desktop virtualization. If you're at the show, please stop by and say hello to our team there!

    Educause 2012

    Another good example is Educause. We've gone the last few years to show off a slew of education-oriented applications and capabilities in the Oracle product portfolio, and every year we display those applications through Oracle desktop virtualization. This means the demonstration can easily be set up ahead of time and replicated out to however many demo pods we have available. There's no need for our product teams to set up individual laptops for demos: we can display a standardized Windows desktop virtual machine with their apps all ready to go on a whole bunch of devices, like your standard trade-show laptop, our Sun Ray Clients, and the iPad.

    Educause 2012 just wrapped, so we're sorry we missed you this year. But there is always next year!

    You can also watch this video to see how Catholic Education Australia uses Oracle Secure Global Desktop to help cope with the ever-changing ways that people access their applications.

    -Chris

    Read the article

  • The Social Business Thought Leaders

    - by kellsey.ruppel
    Enterprise Gamification, Big Data, Social Support, Total Customer Experience, Pull Organizations, Social Business. Are these purely the latest buzzwords to enter the market, or significant trends that companies should keep an eye on?

    Oracle recently sponsored and presented at the 5th Social Business Forum, one of the largest European events on the use of social media as a business tool and accelerator. Through the participation of dozens of practitioners, experts, and customer success stories, the conference demonstrated how a perfect storm of technology, management, and cultural change is pushing peer-to-peer conversations deep into business processes. It is clear that Social Business is serving as a new propellant of agility, efficiency, and reactivity.

    According to Deloitte and MIT, what we have learned to call Social Business is considered important in the next 3 years by 86% of managers (see "Social Business: What Are Companies Really Doing?", MIT Sloan Management Review and Deloitte). McKinsey further estimates the value that can be unlocked in terms of knowledge-worker productivity, consumer insights, product co-creation, and improved sales, marketing, and customer service at up to $1300B (see "The social economy: Unlocking value and productivity through social technologies", McKinsey Global Institute). This impacts every industry, with the strongest effects seen in Media & Entertainment, Technology, Telcos, and Education.

    For those not able to attend the Social Business Forum, and also for the many friends that joined us in Milan, we decided to keep the conversation going by extracting some golden nuggets from the perspective of five of the most well-known thought leaders in this space. Starting this week you will have the chance to view:

    - John Hagel (author of The Power of Pull and Co-Chairman, Center for the Edge at Deloitte & Touche)
    - Christian Finn (Senior Director, WebCenter Evangelist at Oracle)
    - Steve Denning (author of Radical Management and independent management consulting professional)
    - Esteban Kolsky (Principal & Founder at ThinkJar)
    - Ray Wang (Principal Analyst & CEO at Constellation Research)

    Stay tuned to hear:

    - How pull organizations are addressing some of the deepest challenges impacting the market.
    - How to integrate social into existing infrastructure and processes.
    - How to apply radical management to become more agile and profitable.
    - The importance of gamification as an engagement lever.

    The first interview with John Hagel will be published tomorrow. Don't miss it and the entire series!

    Read the article

  • links for 2010-12-22

    - by Bob Rhubart
    - @hajonormann: BPM: Top Seven Architectural Topics in 2010
      Oracle ACE Director Hajo Normann offers details on how to design a BPM/SOA solution, including: modeling human interaction, improving BPM models, orchestrating composed services, central task management, new approaches for business-IT alignment, solutions for non-deterministic processes, and choreography. (tags: oracle otn soasymposium infoq soa bpm)

    - InfoQ: Simplicity, The Way of the Unusual Architect
      Dan North talks about the tendency developers-becoming-architects have to create bigger and more complex systems. Without trying to be simplistic, North argues for simplicity, offering strategies to extract the simple essence from complex situations. (tags: ping.fm)

    - Fun with Sun Ray, 3D, Oracle VM x86 and SRIOV (Wim Coekaerts Blog)
      "One of the things I like about my job is that I get to play around with stuff and make use of the technologies we work on in my teams. Sort of my own little playground." - Wim Coekaerts (tags: oracle otn virtualization oraclevm)

    - Oracle VM VirtualBox 4.0.0 Released! (Oracle's Virtualization Blog)
      And you were worried about what to get that special someone for Christmas... (tags: oracle otn virtualization virtualbox)

    - Virtual Developer Day: Oracle WebLogic Server & Java EE (#OTNVDD) (Oracle Technology Network Blog (aka TechBlog))
      "Virtual Developer Day is back with a vengeance! On Feb. 1, login to learn how Oracle WebLogic Server enables a whole new level of productivity for enterprise developers." Registration is open. (tags: oracle otn events webinar java)

    - New Coherence 3.6 Oracle University Course (Cristóbal Soto's Blog)
      Cristóbal Soto shares information on the "Oracle Coherence 3.6: Share and Manage Data in Clusters" course now available through Oracle University. (tags: oracle otn grid coherence)

    - The Aquarium: Oracle WebLogic Server & Java EE developer day
      "Oracle WebLogic is well on its way to contribute to the general Java EE 6 momentum and the OTN Blog has just announced a Virtual Developer Day for Oracle WebLogic." (tags: oracle otn weblogic java)

    - Enterprise 2.0 Use Cases for Semantic Web (Reiser 2.0)
      "How can an enterprise improve the efficiency and effectiveness of their Knowledge and Community model leveraging semantic technologies and social networking dynamics?" - Peter Reiser (tags: oracle otn enterprise2.0 semanticweb)

    - John Gøtze: European Interoperability Framework 2.0
      "This week, the European Commission announced an updated interoperability policy in the EU. The Commission has committed itself to adopt a Communication that introduces the European Interoperability Strategy (EIS) and an update to the European Interoperability Framework (EIF)..." - John Gøtze (tags: entarch Interoperability)

    - Andy Mulholland: Maybe Web 3.0 is quite understandable – and a natural result
      "The idea of Web 1.0 = content, Web 2.0 = people and Web 3.0 = services has a nice symmetrical feel to it, in fact it feels basically right as such a definition would include the two other major definitions as well. So if we put these things all together what picture do we see?" - Andy Mulholland (tags: web2.0 web3.0)

    - Ken Downs: A Working Definition of Business Logic, with Implications for CRUD Code
      "The Wikipedia entry on 'Business Logic' has a wonderfully honest opening sentence stating that 'Business logic, or domain logic, is a non-technical term...'" (tags: businesslogic crud)

    Read the article

  • Issues with LVM partition size in Server 13.04

    - by Michael
    I am new to Ubuntu and a little confused about how hard drive partitions and LVM work. I remember setting up Ubuntu Server 13.04 and telling it to use 1TB of a 3TB server. Well, I have maxed that out with Blu-ray rips and want the rest of the drive for space.

    On login it says:

        System load:  2.24                Processes:           179
        Usage of /:   88.7% of 912.89GB   Users logged in:     0
        Memory usage: 6%                  IP address for p5p1: 192.168.0.100
        Swap usage:   0%

        => / is using 88.7% of 912.89GB

    lvdisplay outputs:

        --- Logical volume ---
        LV Path                /dev/DeathStar-vg/root
        LV Name                root
        VG Name                DeathStar-vg
        LV Write Access        read/write
        LV Creation host, time DeathStar, 2013-05-18 22:21:11 -0400
        LV Status              available
        # open                 1
        LV Size                2.70 TiB
        Current LE             707789
        Segments               2
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     256
        Block device           252:0

        --- Logical volume ---
        LV Path                /dev/DeathStar-vg/swap_1
        LV Name                swap_1
        VG Name                DeathStar-vg
        LV Write Access        read/write
        LV Creation host, time DeathStar, 2013-05-18 22:21:11 -0400
        LV Status              available
        # open                 2
        LV Size                3.75 GiB
        Current LE             959
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     256
        Block device           252:1

    vgdisplay outputs:

        VG Name               DeathStar-vg
        System ID
        Format                lvm2
        Metadata Areas        1
        Metadata Sequence No  4
        VG Access             read/write
        VG Status             resizable
        MAX LV                0
        Cur LV                2
        Open LV               2
        Max PV                0
        Cur PV                1
        Act PV                1
        VG Size               2.73 TiB
        PE Size               4.00 MiB
        Total PE              715335
        Alloc PE / Size       708748 / 2.70 TiB
        Free  PE / Size       6587 / 25.73 GiB

    df outputs:

        Filesystem                     1K-blocks      Used Available Use% Mounted on
        /dev/mapper/DeathStar--vg-root 957238932 848972636  59634696  94% /
        none                                   4         0         4   0% /sys/fs/cgroup
        udev                             1864716         4   1864712   1% /dev
        tmpfs                             374968      1060    373908   1% /run
        none                                5120         4      5116   1% /run/lock
        none                             1874824       148   1874676   1% /run/shm
        none                              102400        24    102376   1% /run/user
        /dev/sda2                         234153     56477    165184  26% /boot

    And fdisk /dev/sda -l outputs:

        Disk /dev/sda: 3000.6 GB, 3000592982016 bytes
        255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x00000000

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1               1  4294967295  2147483647+  ee  GPT
        Partition 1 does not start on physical sector boundary.

    I just don't know what to make of all this, and I am not sure how I can make it use all 2.73TiB. Thanks in advance for any help.

    EDIT: Yes, I did make changes to the LVM config, but it didn't do anything. As requested, the output of parted -l /dev/sda:

        Model: ATA WDC WD30EFRX-68A (scsi)
        Disk /dev/sda: 3001GB
        Sector size (logical/physical): 512B/4096B
        Partition Table: gpt

        Number  Start   End     Size    File system  Name  Flags
         1      1049kB  2097kB  1049kB                     bios_grub
         2      2097kB  258MB   256MB   ext2
         3      258MB   3001GB  3000GB                     lvm

        Model: ATA WDC WD30EFRX-68A (scsi)
        Disk /dev/sdb: 3001GB
        Sector size (logical/physical): 512B/4096B
        Partition Table: msdos

        Number  Start  End  Size  Type  File system  Flags

        Model: Linux device-mapper (linear) (dm)
        Disk /dev/mapper/DeathStar--vg-swap_1: 4022MB
        Sector size (logical/physical): 512B/4096B
        Partition Table: loop

        Number  Start  End     Size    File system     Flags
         1      0.00B  4022MB  4022MB  linux-swap(v1)

        Model: Linux device-mapper (linear) (dm)
        Disk /dev/mapper/DeathStar--vg-root: 2969GB
        Sector size (logical/physical): 512B/4096B
        Partition Table: loop

        Number  Start  End     Size    File system  Flags
         1      0.00B  2969GB  2969GB  ext4
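
    The listings show the logical volume is already 2.70 TiB while the ext4 filesystem inside it is only about 913 GB, so the missing space sits between the filesystem and the LV, not in the volume group. A possible fix, offered as a sketch rather than a verified recipe (take a backup first): grow the filesystem to fill its device, and use lvextend only if the LV itself ever needs to absorb the volume group's remaining free space.

        # Grow the ext4 filesystem to the full size of its logical volume.
        # resize2fs with no size argument expands to fill the device; growing
        # ext4 this way can be done online, on the mounted filesystem.
        sudo resize2fs /dev/mapper/DeathStar--vg-root

        # Only if the LV itself needed growing into the VG's free space first:
        # sudo lvextend -l +100%FREE /dev/DeathStar-vg/root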

    Read the article

  • Fatal error 9001 on shared SQL Server 2008

    - by user643192
    I've asked this same question on Stack Overflow, but I might actually have a better chance of an answer here, so I am posting here as well. I know this question has been asked here before, but none of the suggestions have worked for me.

    I have an ASP.NET MVC (v3) website on a shared server. The website was working fine for a few weeks, until I started getting a fatal error 9001 straight after login. Because this is a shared server, there are only very limited things I can do with the database (and I don't know that much about databases anyway). The help desk insists that there is nothing wrong with their server. I got various suggestions from them:

    1. Upgrade to the business plan because I am out of space. Even though the .mdf data file is small, the .ldf log file can grow very quickly and is probably taking up all the space. I have 100MB available and the database size is 16.5MB; can the log file really take up the remaining space? On querying this with the help desk, they admitted that my entire database is only 25MB.

    2. There is something wrong with my SQL queries and I should check the website. I'm using EF with LINQ to SQL. Everything was working fine until now... can something in the queries cause this sort of error?

    3. There is nothing wrong to be seen in the database logs, so this error cannot possibly have happened. I should log it next time it happens and contact them again.

    I found some posts suggesting that restoring a DB backup can get rid of the issue. I do not have a recent backup, and I can't take a new one because of the fatal error 9001 occurring. Since this is a shared server, I have about zero authority to execute anything against the DB (think CHECKDB, truncating the log, etc.). So I am at my wits' end, pretty much. What else can I do or try to get my website moving again?

    Read the article

  • HTPC aka "Media Center": cheap and *silent*?

    - by Unknown
    It may be me, or the place I live (Italy), but it seems pretty hard to get a build, a prebuilt nettop, or a laptop that fits the need. I need something:

    - silent
    - able to play back all H.264 full-HD content without stuttering (and without losing hardware acceleration because of soft subtitles...)
    - not ugly
    - silent
    - and (possibly) cheap.

    I'm going the Linux route, therefore I'm moving towards a CPU-based or NVIDIA-integrated solution (I don't think ATI hardware-accelerated playback, or the Intel "HD" acceleration, is usable yet). The options:

    - Ion nettop: it's either the Acer Revo (but here it's incredibly pricey, and it's hard to find the dual-core version) or the ASRock Ion 330, which in the current version is rated "silent" at 26 dB. 26! That sounds pretty noisy to me, and the previous version was even worse. Was this product really aimed at the HTPC market?

    - The Dell Zino: I think it's ATI-based, unfortunately.

    - Laptop: correct me if I'm wrong, but sub-600 €/$ units are quite loud under full load (because of the tiny fans), and ULV laptops are quite similar: tiniest fans = high-pitched noise, and the CPU still lacks the power for non-hardware-accelerated video decoding.

    - Handmade build: little money can be saved with underpowered CPUs, since a low-to-midrange CPU would help in the case of non-hardware-accelerated content; the cases are quite pricey; the PSU one has to get ranges between 100/150 €/$ minimum to keep the noise down; and a low-to-mid build, all included, sums up to over 650 €/$ for a still-ugly-looking unit, without the Blu-ray drive.

    Please help. What do you advise on this? ;) Am I ignoring laptops too much, maybe? Are low-priced Acers really that noisy/high-pitched under full load?

    Read the article

  • Can't install new database in OpenLDAP 2.4 with BDB on Debian

    - by Timothy High
    I'm trying to install an OpenLDAP server (slapd) on a Debian EC2 instance. I have followed all the instructions I can find, and am using the recommended slapd-config approach to configuration. It all seems to be just fine, except that for some reason it can't create my new database.

    ldap.conf.bak (renamed to ensure it's not being used):

        ##########
        # Basics #
        ##########
        include /etc/ldap/schema/core.schema
        include /etc/ldap/schema/cosine.schema
        include /etc/ldap/schema/nis.schema
        include /etc/ldap/schema/inetorgperson.schema
        pidfile /var/run/slapd/slapd.pid
        argsfile /var/run/slapd/slapd.args
        loglevel none
        modulepath /usr/lib/ldap
        # modulepath /usr/local/libexec/openldap
        moduleload back_bdb.la

        database config
        #rootdn "cn=admin,cn=config"
        rootpw secret

        database bdb
        suffix "dc=example,dc=com"
        rootdn "cn=manager,dc=example,dc=com"
        rootpw secret
        directory /usr/local/var/openldap-data

        ########
        # ACLs #
        ########
        access to attrs=userPassword
            by anonymous auth
            by self write
            by * none
        access to *
            by self write
            by * none

    When I run slaptest on it, it complains that it couldn't find the id2entry.bdb file:

        root@server:/etc/ldap# slaptest -f ldap.conf.bak -F slapd.d
        bdb_db_open: database "dc=example,dc=com": db_open(/usr/local/var/openldap-data/id2entry.bdb) failed: No such file or directory (2).
        backend_startup_one (type=bdb, suffix="dc=example,dc=com"): bi_db_open failed! (2)
        slap_startup failed (test would succeed using the -u switch)

    Using the -u switch it works, of course. But that merely creates the configuration; it doesn't resolve the underlying problem:

        root@server:/etc/ldap# slaptest -f ldap.conf.bak -F slapd.d -u
        config file testing succeeded

    Looking in the database directory, the basic files are there (with the right ownership, after a manual chown), but the .bdb file wasn't created:

        root@server:/etc/ldap# ls -al /usr/local/var/openldap-data
        total 4328
        drwxr-sr-x 2 openldap openldap    4096 Mar  1 15:23 .
        drwxr-sr-x 4 root     staff       4096 Mar  1 13:50 ..
        -rw-r--r-- 1 openldap openldap    3080 Mar  1 14:35 DB_CONFIG
        -rw------- 1 openldap openldap   24576 Mar  1 15:23 __db.001
        -rw------- 1 openldap openldap  843776 Mar  1 15:23 __db.002
        -rw------- 1 openldap openldap 2629632 Mar  1 15:23 __db.003
        -rw------- 1 openldap openldap  655360 Mar  1 14:35 __db.004
        -rw------- 1 openldap openldap 4431872 Mar  1 15:23 __db.005
        -rw------- 1 openldap openldap   32768 Mar  1 15:23 __db.006
        -rw-r--r-- 1 openldap openldap    2048 Mar  1 15:23 alock

    (Note that, because I'm doing this as root, I also had to change ownership of some of the files created by slaptest.)

    Finally, I can start the slapd service, but it dies in the attempt (text from syslog):

        Mar  1 15:06:23 server slapd[21160]: @(#) $OpenLDAP: slapd 2.4.23 (Jun 15 2011 13:31:57) $#012#011@incagijs:/home/thijs/debian/p-u/openldap-2.4.23/debian/build/servers/slapd
        Mar  1 15:06:23 server slapd[21160]: config error processing olcDatabase={1}bdb,cn=config:
        Mar  1 15:06:23 server slapd[21160]: slapd stopped.
        Mar  1 15:06:23 server slapd[21160]: connections_destroy: nothing to destroy.

    I manually checked the olcDatabase={1}bdb file, and it looks fine to my amateur eye; all my specific configs are there. Unfortunately, syslog isn't reporting a specific error in this case (if it were a file permission error, it would say so). I've tried uninstalling and reinstalling slapd, changing permissions, and Googling my wits out, but I'm tapped out. Any OpenLDAP genius out there would be greatly appreciated!

    Read the article

  • Ripping CD Audio simultaneously from 2 drives on one PC via USB or PATA - rip accuracy preserved?

    - by Rob
    I'm considering ripping (reading) audio from CDs using two drives simultaneously to speed up the process, i.e. two discs at a time rather than one. Are there any issues with achieving maximum rip accuracy? In general, I wondered if people have tried this, and whether the simultaneous streams from both rip activities would overload the host machine and cause packet loss or read retries, resulting in a sub-standard CD-DA audio rip. If it just means each rip is slightly slower (but still faster than doing one rip after another sequentially) while preserving maximum accuracy, then that is OK for me. I will be using dBpoweramp to rip the CDs and will convert to the lossless FLAC format.

    Specific examples: there are two machines I intend to do it on.

    - A Toshiba NB100 1.6GHz Atom netbook, 2GB RAM, running Windows XP Home, with one external LG DVD/CD burner and one external LG Blu-ray burner attached via USB 2.0, ripping to the machine's 5400rpm internal hard drive. This rips from one CD drive very well, more than adequately; it is a nippy, fast little machine for its specification.

    - A desktop PC running Windows 7 Home Premium with an MSI P4M900M2-L/MS-7255 v2.0 motherboard, a 1.86GHz Intel Core 2 Duo E6320, a 7200rpm hard drive, and 2GB RAM, with an internal LG PATA DVD/CD burner (master) and a Philips DVD/CD burner (slave) on the same PATA bus (perhaps separate buses would be another option to consider here).

    Thoughts?

    Read the article

  • Identify differences between MP3 files

    - by Thingomy
    I have two old, similar directory trees with MP3 files in them. I am happily using tools like diff and rsync to identify and merge the files that are only present on one side, or that are identical; I'm left with a bunch of files that are bitwise different. Running diff over a pair of actually-different files (with the -a flag to force text analysis) produces incomprehensible gibberish. I have listened to files from both sides, and they both seem to play fine (but at nearly 10 minutes per song, listening to each of them twice, I haven't done many).

    I suspect the differences are due to some player in the past "enhancing" my collection by messing about with ID3 tags, but I can't be certain. Even if I identify differences in the ID3 tags, I would like to confirm that no cosmic ray or file-copy error has damaged any of the files.

    One method that occurs to me is finding the byte locations of the differences and ignoring all changes in the first ~10kB of each file, but I don't know how to do this. I have on the order of a hundred or so files that differ across the directory tree.

    I found "How to compare mp3, flac audio data in a file, ignoring header data (ID3 tag) etc.?", but I can't run AllDup since I'm on Linux only, and from the sounds of it, it would only partially solve my issues anyway.
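
    One way to do the byte-level check without guessing at offsets is to strip the tag regions and hash what remains; the ID3v2 header makes the leading tag's size explicit. A sketch in Python (assuming an ID3v2 tag at the front and/or ID3v1 at the back, which covers the common cases; an ID3v2.4 footer or APE tags would need extra handling):

        import hashlib
        import sys

        def audio_hash(path):
            """Hash an MP3's audio bytes, skipping ID3v2 (front) and ID3v1 (back) tags."""
            with open(path, "rb") as f:
                data = f.read()
            if data[:3] == b"ID3":
                # ID3v2 size: four sync-safe bytes (7 bits each) at offsets 6-9,
                # not counting the 10-byte header itself.
                size = (data[6] << 21) | (data[7] << 14) | (data[8] << 7) | data[9]
                data = data[10 + size:]
            if len(data) > 128 and data[-128:-125] == b"TAG":
                data = data[:-128]          # trailing ID3v1 tag is a fixed 128 bytes
            return hashlib.md5(data).hexdigest()

        if __name__ == "__main__":
            a, b = sys.argv[1], sys.argv[2]
            print("audio identical" if audio_hash(a) == audio_hash(b) else "audio differs")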

    Read the article

  • Windows XP installation problems

    - by Samurai Waffle
    I recently asked a question on here and thought I had it working... here is a link to it: Windows XP Installation problems.

    So basically, I'm having trouble getting XP installed. To sum it up: a computer I have had a boot-sector virus, and I used Darik's Boot and Nuke to wipe the hard drive clean, so the hard drive has nothing on it. I had to try to install Windows through a DOS prompt, because for some reason it won't read it off the DVD. The UBCD is able to look at the files located on the DVD I have in, but I can't boot from it for some reason.

    So I extracted the DVD contents to a USB drive, booted to DOS, and started the setup process. Here's the weird thing with DOS: it can only find the C: drive, and the C: drive in DOS is the flash drive that I have in, running DOS. I can't find the hard drive anywhere!

    So anyway, after starting the setup process, it copied the files over to the "hard drive" (which took 16 hours, because the version of DOS I ran couldn't run smartdrv.exe), and it said the computer had to reboot. So I let it reboot, and it stopped and said there is no boot device. So I popped in the UBCD that I have installed on a flash drive, and I discovered that setup had copied the Windows files over to the flash drive and not the hard drive. It never asked where it should extract the files...

    So I toyed around with UBCD and ran a test on the hard drive to make sure it was fine, and it came out clean. So I'm stuck now. How can I get this installed?

    Writing this, I came up with an idea: if I copy the DOS startup files over to the hard drive, would I be able to start DOS from it? If so, I believe that could fix my problems. Any help is greatly appreciated, because I am running out of ideas and am at my wits' end with this computer.

    Read the article

  • Slow boot for OS and external devices

    - by Derek Van Cuyk
    I have been having this problem intermittently, but as of yesterday it has become more consistent. It originally started when I rebooted my PC at home and the OS (Windows 8) sat in a loop, appearing to do nothing while loading. I figured, since this was a new installation, that something might have become corrupted, and I decided to reinstall. So I tried to boot off of the thumb drive that had the installation ISO and encountered pretty much the same issue. Same with the DVD drive. So I rebooted once again and left it to load the entire night, just to see if it ever would, and sure enough, by this morning Windows had finally loaded. Authentication had the same problem, albeit not quite as long (it took about 5 minutes to authenticate).

    However, once I was in, everything appeared to be working fine and as quickly as normal, with the exception of scanning the C: drive for errors, which ran unbearably slowly (45 minutes before I left for work, and it still had not finished scanning a 64GB SSD). I should mention that I have had this issue before, though never when loading the OS: it occurred when trying to install Windows 7 from a different DVD drive than the one I have now. It took me about 3 hours to do it, since I had to wait sometimes 30+ minutes for each step to finish processing.

    Does anyone have an idea as to what can cause this? I am assuming it is the motherboard, since it is responsible for communication with all the devices I'm having issues with, but I cannot find anyone else who has had a problem like this, and I don't want to drop more money on a motherboard if it isn't the problem.

    Hardware:

    - Motherboard: Asus M4A78T-E, Socket AM3, AMD 790GX, Hybrid CrossFireX
    - Hard drive: Kingston SSDNow V+180 64GB Micro SATA II 3Gb/s 1.8-inch solid-state drive (SVP180S2/64G)
    - Optical drive: Samsung Blu-ray combo internal 12x-readable and DVD-writable drive with LightScribe (SH-B123L/BSBP)

    Thanks,
    Derek

    Read the article

  • AVCHD MTS h264 1080p file with choppy playback in Linux

    - by marc
    When I try to play video files from my camera, I get this:

        Seems stream 0 codec frame rate differs from container frame rate: 50.00 (50/1) -> 50.00 (50/1)
        Input #0, mpegts, from '00027.MTS':
          Duration: 00:00:38.88, start: 2.884289, bitrate: 16945 kb/s
          Program 1
            Stream #0.0[0x1011]: Video: h264 (High), yuv420p, 1920x1080 [PAR 1:1 DAR 16:9], 50 fps, 50 tbr, 90k tbn, 50 tbc
            Stream #0.1[0x1100]: Audio: ac3, 48000 Hz, stereo, s16, 256 kb/s

    On my Linux computer (Ubuntu 12.04), I get choppy playback. It's completely unusable... I tried:

    - Totem
    - VLC
    - mplayer

    The result is always the same issue. I sent the same video file to a friend who has Ubuntu 10.04 to test, and he also has the same issue. He has Windows 7 as well, and confirms that on Windows the video plays well.

    I have an Intel Core 2 CPU 6300 @ 1.86GHz x 2 with a GeForce 9600 GT, running the closed NVIDIA drivers. This is not a "big files play slowly from an HDD" issue: I have an SSD drive! I have spent the last days and nights trying hundreds of commands for ffmpeg, HandBrake, and MEncoder, and none of them would let me create a file with enough quality. I downloaded a few movies from YouTube in 1080p, and playback worked well, without any big pixels or choppiness.

    I would like the highest possible quality. I will put these files onto a Blu-ray disc, so I don't need to compress them to a smaller size. I just want smooth playback on my Linux box. On Windows, the same file works well.
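
    One avenue worth trying before transcoding anything, offered as a hedged suggestion: with the proprietary NVIDIA driver, a GeForce 9600 GT can decode 1080p H.264 in hardware via VDPAU, which takes the CPU out of the decode path entirely. With mplayer that looks like:

        # Use the VDPAU video output and the VDPAU-accelerated H.264 decoder
        # (requires the closed NVIDIA driver plus the libvdpau package).
        mplayer -vo vdpau -vc ffh264vdpau 00027.MTS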

    Read the article

< Previous Page | 20 21 22 23 24 25 26 27 28 29  | Next Page >