Search Results

Search found 74621 results on 2985 pages for 'oracle platform migration data migration'.


  • testdisk - recover partition table

    - by Evaggelos Balaskas
    I destroyed the partition table of my laptop. Testdisk reports the following:
    Disk laptop.img - 250 GB / 232 GiB - CHS 30402 255 63 (RO)
    Partition              Start        End   Size in sectors
    >P MS Data            435868     456606       20739  [NO NAME]
     P MS Data          19232600   19235479        2880  [NO NAME]
     D MS Data          41945087   83890143    41945057
     D MS Data          57151486  168579069   111427584
     D MS Data          67637246  141037565    73400320
     D MS Data         151523326  193466365    41943040
     D MS Data         170617328  170618223         896
     D MS Data         170631168  170634047        2880
     D MS Data         171338232  171344405        6174  [Boot]
     D MS Data         172008235  172231918      223684  [NO NAME]
     P MS Data         193466368  214437887    20971520
     D MS Data         217321375  225321678     8000304  [root]
     D MS Data         224923646  308809725    83886080  [media]
     D MS Data         308809728  420237311   111427584
     D MS Data         418910206  481824765    62914560  [vmimages]
    My partition table had three primary partitions: 1. WinXP Home, 2. /boot, 3. LVM. Inside LVM I had 9 or 10 LVM partitions; one of them was my home (encrypted with LUKS). Testdisk can't recover my partition table or any other partition, and the partitions marked P don't contain any useful data. I want to use dd to extract the partitions and try to recover as many files as I can. Any ideas how I can extract, e.g., the [root] LVM partition from the testdisk report above? I am afraid that my disk may also be corrupted.
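    If the image really uses 512-byte sectors (the usual case), each testdisk row already gives dd everything it needs: skip = the start sector and count = the size in sectors, e.g. dd if=laptop.img of=root.img bs=512 skip=217321375 count=8000304 for the [root] row. The same carve sketched in Python, in case you want to script it for several partitions (the file names are placeholders, and this assumes the sector size really is 512 bytes):

        # Carve one partition out of the disk image by sector range (assumes 512-byte sectors).
        SECTOR = 512
        start, count = 217321375, 8000304            # the [root] row from the testdisk report
        with open("laptop.img", "rb") as src, open("root.img", "wb") as dst:
            src.seek(start * SECTOR)
            remaining = count * SECTOR
            while remaining > 0:
                chunk = src.read(min(remaining, 1 << 20))
                if not chunk:                        # image shorter than expected
                    break
                dst.write(chunk)
                remaining -= len(chunk)

    Once extracted, the piece can be attached as a loop device and inspected; whether the LVM metadata inside it survived is a separate question, so treat this only as the extraction step.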

    Read the article

  • XBRL - Moving from Production to Consumption

    - by jmorourke
    Here's an update on what's new with XBRL and how it can actually benefit your organization rather than adding extra time and costs to financial reporting. On February 29th (leap day) of 2012 I attended the XBRL and Financial Analysis Technology Conference at Baruch College in NYC. The event, which attracted over 300 XBRL gurus and fans, was presented by XBRL US, The New York Society of Security Analysts' Improved Corporate Reporting Committee, and Baruch College's Robert Zicklin Center for Corporate Integrity. The event featured keynotes from the U.S. Securities and Exchange Commission (SEC) and the CFA Institute, as well as panels covering alternative research tools and data, corporate reporting to stakeholders, and a demonstration of XBRL analysis tools. The program culminated in a presentation of the finalists and the winner of the $20,000 XBRL Challenge.
    Some of the key points made in the sessions included:
    - The focus of XBRL tools is moving from production to consumption.
    - As of February 2012, over 9,000 companies are reporting in XBRL, with over 10 million facts filed to date.
    - XBRL taxonomy extensions have dropped from 27% to 11%, making comparisons easier.
    - The SEC reports that XBRL makes it easier to analyze disclosures and focus on accounting issues.
    - XBRL is helping standards-setters like the FASB speed their analysis of the impact of proposed accounting rule changes.
    - Companies like Thomson Reuters report that XBRL is helping speed the delivery of data to clients.
    The most interesting part of the program, though, was the session highlighting the 5 finalists in the XBRL Challenge competition and the winning solution. The XBRL Challenge was launched in 2011 as a means of spurring the development of more end-user tools to help with the consumption of XBRL-based financial information. Over an 8-month process handled by 5 judges, there were 84 registrants, 15 completed submissions, 5 finalists and one winner of the challenge. All of the solutions are open-sourced tools and most of them focus on consuming XBRL-based data. The 5 finalists included:
    - Advanced XBRL Processing from Oxide solutions – XBRL viewer for taxonomies, filings, and company data with peer comparison capabilities.
    - Arrelle – API for XBRL processes; supports SEC validations, RSS feeds to access filings, etc.
    - Calcbench – XBRL data analysis tool that can be embedded in other web applications. This tool can combine XBRL filings with real-time market data.
    - XBRL to XL – allows the importing of XBRL data into Microsoft Excel for analysis and comparisons. Users start on the web and populate Excel with XBRL data.
    - XBurble – allows users to search and view XBRL filings, export to Excel, merge for comparison, and includes a workflow interface.
    The winner of the $20,000 XBRL Challenge prize was Calcbench. More information about the XBRL Challenge and the finalists can be found at www.XBRLUS.org/challenge.
    XBRL for Sustainability Reporting – other recent news on the XBRL front was the announcement by the Global Reporting Initiative (GRI) of an XBRL taxonomy for Sustainability Reporting. This taxonomy was co-developed by the GRI and Deloitte and is designed to make the consumption of data found in Sustainability Reports much easier.
    Although there is no government mandate to file Sustainability Reports in XBRL format, organizations that do use the GRI guidelines for Sustainability Reporting are encouraged to tag and submit their data voluntarily to the GRI, which will populate a database with Sustainability Reporting data and make it available to the public. For more information about this initiative, you can go to the GRI web site: www.globalreporting.org.
    So how does all of this benefit corporate filers and investors? Since its introduction, the consensus in the market is that XBRL has mainly benefited the regulators and investment analysts who need to consume and analyze large volumes of financial data. But the emergence of more end-user tools for consuming and analyzing XBRL-based data, and the ability to perform quick comparisons of one company against its peers and competitors in an industry group, will soon accelerate the benefits to corporate finance staff as well as individual investors. This could apply to financial results tagged in XBRL as well as non-financial information such as Sustainability Reporting, which over the long term will likely be integrated with financial reporting. And as multiple regulators and agencies in a country adopt the XBRL standard for corporate filings, more benefits will accrue as companies are able to leverage one set of XBRL-based financial data for multiple regulatory filings.
    For more information about the latest developments in XBRL, check out the XBRL US or XBRL International web sites: www.xbrl.org, www.xbrlus.org. For more information about what Oracle is doing to support XBRL, here are some links:
    http://www.oracle.com/us/solutions/ent-performance-bi/disclosure-management-065892.html
    http://www.oracle.com/technetwork/database/features/xmldb/index-087631.html
    Feel free to contact me if you have any questions or need more information: [email protected]

    Read the article

  • 7 Steps To Cut Recruiting Costs & Drive Exceptional Business Results

    - by Oracle Accelerate for Midsize Companies
    By Steve Viarengo, Vice President Product Management, Oracle Taleo Cloud Services
    In good times, trimming operational costs is an ongoing goal. In tough times, it's a necessity. In both good times and bad, however, recruiting occurs. Growth increases headcount in good times, and opportunistic or replacement hiring occurs in slow business cycles. By employing creative recruiting strategies in tandem with the latest technology developments, you can reduce recruiting costs while driving exceptional business results. Here are some critical areas to focus on.
    1. Target Direct Cost Savings. Total recruiting process expenses are the sum of external costs plus internal labor costs. Most organizations can reduce recruiting expenses with direct cost savings. While additional savings on indirect costs can be realized from process improvement and efficiency gains, there are direct cost savings and benefits readily available in three broad areas: sourcing, assessments, and green recruiting.
    2. Sourcing: Reduce Agency Costs. Agency search firm fees can amount to 35 percent of a new employee's annual base salary. Typically taken from the hiring department budget, these fees may not be visible to HR. By relying on internal mobility programs, referrals, candidate pipelines, and corporate career Websites, organizations can reduce or eliminate this agency spend. And when you do have to pay third-party agency fees, you can optimize the value you receive by collaborating with agencies to identify referred candidates, ensure access to candidate data and history, and receive automatic notifications and correspondence.
    3. Sourcing: Reduce Advertising Costs. You can realize significant cost reductions by placing all job positions on your corporate career Website. This will allow you to reap a substantial number of candidates at minimal cost compared to job boards and other sourcing options.
    4. Sourcing: Internal Talent Pool. Internal talent pools provide a way to reduce sourcing and advertising costs while delivering improved productivity and retention. Internal redeployment reduces costs and ramp-up time while increasing retention and employee satisfaction.
    5. Sourcing: External Talent Pool. Strategic recruiting requires identifying and matching people with a given set of skills to a particular job while efficiently allocating sourcing expenditures. By using an e-recruiting system (which drives external talent pool management) with a candidate relationship database, you can automate prescreening and candidate matching while communicating with targeted candidates. Candidate relationship management can lower sourcing costs by marketing new job opportunities to candidates sourced in the past. By mining the talent pool in this fashion, you eliminate the need to source a new pool of candidates for each new requisition. Managing and mining the corporate candidate database can reduce the sourcing cost per candidate by as much as 50 percent.
    6. Assessments: Reduce Turnover Costs. By taking advantage of assessments during the recruitment process, you can achieve a range of benefits, including better productivity, superior candidate performance, and lower turnover (providing considerable savings). Assessments also save recruiter and hiring manager time by focusing on a short list of qualified candidates. Hired for fit, such candidates tend to stay with the organization and produce quality work—ultimately driving revenue.
    7. Green Recruiting: Reduce Paper and Processing Costs. You can reduce recruiting costs by automating the process—and making it green. A paperless process informs candidates that you're dedicated to green recruiting. It also leads to direct cost savings. E-recruiting reduces energy use and pollution associated with manufacturing, transporting, and recycling paper products. And process automation saves energy in mailing, storage, handling, filing, and reporting tasks. Direct cost savings come from reduced paperwork related to résumés, advertising, and onboarding.
    Improving the recruiting process through sourcing, assessments, and green recruiting not only saves costs. It also positions the company to improve the talent base during the recession while retaining the ability to grow appropriately in recovery.

    Read the article

  • How can I transfer article content from old Joomla 1.5 site to new 2.5 site

    - by PaulHurleyuk
    I have an existing Joomla 1.5 site and am intending to wipe it and install a brand new 2.5 site. I will pick new plugins, a new template, etc., but would like to transfer the basic text and images of the articles on the 1.5 site to the new site. I am less concerned with the categories and tags of those old articles, as they'll probably go in an 'old' category. I have several file and database backups of the 1.5 site. Has anyone done anything similar? Are the two article db schemas similar enough to just transfer the data?
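    The schemas are close enough for a straight copy of the basic article fields: both 1.5 and 2.5 keep the article body in the introtext and fulltext columns of the content table. A rough sketch of the idea in Python (the jos_/j25_ table prefixes, connection details, and the target category id are assumptions you would need to adjust to your own backups):

        import pymysql  # or any MySQL client

        old = pymysql.connect(host="localhost", user="root", password="...", database="joomla15")
        new = pymysql.connect(host="localhost", user="root", password="...", database="joomla25")
        OLD_CATEGORY_ID = 99   # the 'old' category already created in the 2.5 site

        with old.cursor() as src, new.cursor() as dst:
            src.execute("SELECT title, alias, introtext, `fulltext`, created "
                        "FROM jos_content WHERE state = 1")
            for title, alias, intro, full, created in src.fetchall():
                dst.execute(
                    "INSERT INTO j25_content (title, alias, introtext, `fulltext`, state, catid, created) "
                    "VALUES (%s, %s, %s, %s, 1, %s, %s)",
                    (title, alias, intro, full, OLD_CATEGORY_ID, created),
                )
        new.commit()

    Images referenced in the article HTML are stored as relative paths, so copying the images/ folder across usually keeps them working; note that 2.5 also tracks per-article ACL records in #__assets, which a raw INSERT does not create, so check how the site behaves (or rebuild those records) after the import.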

    Read the article

  • Web Services Example - Part 2: Programmatic

    - by Denis T
    In this edition of the ADF Mobile blog we'll tackle part 2 of our Web Service examples. In this posting we'll take a look at using a SOAP Web Service, but calling it programmatically in code and parsing the return into a bean.
    Getting the sample code: Just click here to download a zip of the entire project. You can unzip it and load it into JDeveloper and deploy it either to iOS or Android. Please follow the previous blog posts if you need help getting JDeveloper or ADF Mobile installed. Note: this is a different workspace than WS-Part1.
    Defining our Web Service: Just like our first installment, we are using the same public weather forecast web service provided free by CDYNE Corporation. Sometimes this service goes down, so please ensure you know it's up before reporting this example isn't working. We're going to concentrate on the same two web service methods, GetCityForecastByZIP and GetWeatherInformation.
    Defining the Application: The application setup is identical to the Weather1 version. There are some improvements to the data that is displayed as part of this example though. Now we are able to show the associated image along with each forecast line when using the Forecast By Zip feature. We've also added the temperature Hi/Low values into the UI.
    Summary of Fundamental Changes In This Application: The most fundamental change is that we're binding the UI to the Bean Data Controls instead of directly to the Web Service Data Controls. This gives us much more flexibility to control the shape of the data and allows us to do caching of the data outside of the Web Service. This way, if your application is, say, offline, your bean could still populate with data from a local cache and still show you some UI, as opposed to completely failing because you don't have any connectivity. In general we promote this type of programming technique with ADF Mobile to insulate your application from any issues with network connectivity.
    What's different with this example? We have set up the Web Service DC the same way, but now we have managed beans to process the data. The following classes define the "Model" of our application: CityInformation-CityForecast-Forecast and WeatherInformation-WeatherDescription. We use WeatherBean for UI interaction with the model layer. If you look through this example, we don't really do that much with the Java code except use it to grab the image URL from the weather description. In a more realistic example, you might be using some JDBC classes to persist the data to a local database. To have a good architecture it is always good to keep your model and UI layers separate. This gets muddied if you start to use bindings on a page invoked from Java code, and this Java code starts to become your "model" layer. Since bindings are page specific, your model layer starts to become entwined with your UI. Not good! To help with this, we've added some utility functions that let you invoke DC methods without having a binding and thus execute methods from your "model" layer without requiring a binding in your page definition. We do this with the invokeDataControlMethod of the AdfmfJavaUtilities class. An example of this method call is available in line 95 of WeatherInformation.java and line 93 of CityInformation.java.
    What's a GenericType? Because Web Service Data Controls (and also URL Data Controls, AKA REST) use generic name/value pairs to define their structure and don't have strongly typed objects, these are actually stored internally as GenericType objects.
    The GenericType class is simply a property map of name/value pairs that can be hierarchical. There are methods like getAttribute where you supply the index of the attribute or its string property name. Why is this important to know? Because invokeDataControlMethod returns GenericType objects, and developers either need to parse these GenericType objects themselves or use one of our helper functions.
    GenericTypeBeanSerializationHelper: This class does exactly what its name implies. It's a helper class for developers to aid in serialization of GenericTypes to/from Java objects. This is extremely handy if you have a large GenericType object with many attributes (or you're just lazy like me!) and you just want to parse it out into a real Java object you can use more easily. Here you would use the fromGenericType method. This method takes the class of the Java object you wish to return and the GenericType as parameters. The method then parses through each attribute in the GenericType and uses reflection to set that same attribute in the Java class. Then the method returns that new object of the class you specified. This is obviously very handy to avoid a lot of shuffling code between GenericType and your own Java classes. The reverse method, toGenericType, is also available when you want to go the other way. In this case you supply the string that represents the package location in the DataControl definition (example: "MyDC.myParams.MyCollection") and then pass in the Java object that holds the data, and a GenericType is returned to you. Again, it will use reflection to calculate the attributes that match between the Java class and the GenericType and call the getters/setters on those.
    Issues and Possible Improvements: In the next installment we'll show you how to make your web service calls asynchronously, so your UI will fill dynamically when the service call returns while in the meantime you show the data you have locally in your bean, fed from some local cache. This gives your users instant delivery of some data while you fetch other data in the background.

    Read the article

  • Essbase Data precision unraveled

    - by THE
    (guest reference added by Nancy) Anyone who has been working with data import and export, as well as the Essbase Excel Add-In, has probably come across a phenomenon called data precision: extra trailing digits appear on numbers calculated by Essbase, which get displayed as "10.0000000000001" or "9.99999999999999" instead of a simple "10". This question is one of the recurring ones that Support gets asked over and over again, so we feel the need to give an explanation: I would like to point you to the note The Limits of Data Precision in Essbase (Doc ID 1311188.1), which explains in detail why these numbers show up and what to do about them.
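    For anyone who wants to see the effect outside Essbase: values are stored and calculated as 64-bit IEEE 754 doubles, and any number that has no exact binary representation picks up a tiny error that then shows up in the trailing digits. A quick illustration (plain Python, nothing Essbase-specific):

        print(0.1 + 0.2)          # 0.30000000000000004, not 0.3
        total = sum([0.1] * 100)  # one hundred additions, each slightly off
        print(total == 10.0)      # False -- the per-addition error accumulates
        print(round(total, 6))    # 10.0; rounding at display time hides the artifact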

    Read the article

  • Constricted A* problem

    - by Ragekit
    I've got a little problem with an A* algorithm that I need to constrict a little bit. Basically : I use an A* to find the shortest path between 2 randomly placed room in 3D space, and then build a corridor between them. The problem I found is that sometimes it makes chimney like corridors that are not ideal, so I constrict the A* so that if the last movement was up or down, you go sideways. Everything is fine, but in some corner cases, it fails to find a path (when there is obviously one). Like here between the blue and red dot : (i'm in unity btw, but i don't think it matters) Here is the code of the actual A* (a bit long, and some redundency) while(current != goal) { //add stair up / stair down foreach(Node<GridUnit> test in current.Neighbors) { if(!test.Data.empty && test != goal) continue; //bug at arrival; if(test == goal && penul !=null) { Vector3 currentDiff = current.Data.bounds.center - test.Data.bounds.center; if(!Mathf.Approximately(currentDiff.y,0)) { //wanna drop on the last if(!coplanar(test.Data.bounds.center,current.Data.bounds.center,current.Data.parentUnit.bounds.center,to.Data.bounds.center)) { continue; } else { if(Mathf.Approximately(to.Data.bounds.center.x, current.Data.parentUnit.bounds.center.x) && Mathf.Approximately(to.Data.bounds.center.z, current.Data.parentUnit.bounds.center.z)) { continue; } } } } if(current.Data.parentUnit != null) { Vector3 previousDiff = current.Data.parentUnit.bounds.center - current.Data.bounds.center; Vector3 currentDiff = current.Data.bounds.center - test.Data.bounds.center; if(!Mathf.Approximately(previousDiff.y,0)) { if(!Mathf.Approximately(currentDiff.y,0)) { //you wanna drop now : continue; } if(current.Data.parentUnit.parentUnit != null) { if(!coplanar(test.Data.bounds.center,current.Data.bounds.center,current.Data.parentUnit.bounds.center,current.Data.parentUnit.parentUnit.bounds.center)) { continue; }else { if(Mathf.Approximately(test.Data.bounds.center.x, current.Data.parentUnit.parentUnit.bounds.center.x) && Mathf.Approximately(test.Data.bounds.center.z, current.Data.parentUnit.parentUnit.bounds.center.z)) { continue; } } } } } g = current.Data.g + HEURISTIC(current.Data,test.Data); h = HEURISTIC(test.Data,goal.Data); f = g + h; if(open.Contains(test) || closed.Contains(test)) { if(test.Data.f > f) { //found a shorter path going passing through that point test.Data.f = f; test.Data.g = g; test.Data.h = h; test.Data.parentUnit = current.Data; } } else { //jamais rencontré test.Data.f = f; test.Data.h = h; test.Data.g = g; test.Data.parentUnit = current.Data; open.Add(test); } } closed.Add (current); if(open.Count == 0) { Debug.Log("nothingfound"); //nothing more to test no path found, stay to from; List<GridUnit> r = new List<GridUnit>(); r.Add(from.Data); return r; } //sort open from small to biggest travel cost open.Sort(delegate(Node<GridUnit> x, Node<GridUnit> y) { return (int)(x.Data.f-y.Data.f); }); //get the smallest travel cost node; Node<GridUnit> smallest = open[0]; current = smallest; open.RemoveAt(0); } //build the path going backward; List<GridUnit> ret = new List<GridUnit>(); if(penul != null) { ret.Insert(0,to.Data); } GridUnit cur = goal.Data; ret.Insert(0,cur); do{ cur = cur.parentUnit; ret.Insert(0,cur); } while(cur != from.Data); return ret; You see at the start of the foreach i constrict the A* like i said. If you have any insight it would be cool. Thanks
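    One way to keep a constrained A* complete is to stop pruning neighbours based on the parent's last move and instead fold the constraint into the search state, so each node is a (cell, arrived-vertically) pair; the same cell can then be reached both "after a vertical move" and "after a sideways move" without one blocking the other. A rough, engine-agnostic sketch of that idea (not a drop-in replacement for the Unity code above; the neighbours callback is assumed to yield (next_cell, is_vertical_move) pairs):

        import heapq, itertools

        def a_star(start, goal, neighbours, heuristic):
            # State = (cell, came_vertically): a vertical move is forbidden only when the
            # previous move was vertical, and both states for a cell stay searchable.
            start_state = (start, False)
            g = {start_state: 0}
            parent = {start_state: None}
            tie = itertools.count()          # tie-breaker so the heap never compares cells
            open_heap = [(heuristic(start, goal), next(tie), start_state)]
            while open_heap:
                _, _, state = heapq.heappop(open_heap)
                cell, came_vertically = state
                if cell == goal:
                    path = []
                    while state is not None:
                        path.append(state[0])
                        state = parent[state]
                    return path[::-1]
                for nxt, is_vertical in neighbours(cell):
                    if is_vertical and came_vertically:
                        continue             # the constraint lives here, not in cell pruning
                    nxt_state = (nxt, is_vertical)
                    new_g = g[(cell, came_vertically)] + 1
                    if new_g < g.get(nxt_state, float("inf")):
                        g[nxt_state] = new_g
                        parent[nxt_state] = (cell, came_vertically)
                        heapq.heappush(open_heap, (new_g + heuristic(nxt, goal), next(tie), nxt_state))
            return None                      # genuinely no path under the constraint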

    Read the article

  • Extracting, Transforming, and Loading (ETL) Process

    The process of extracting, transforming, and loading data into a data warehouse is called the Extract, Transform, Load (ETL) process. This process can be used to obtain, analyze, and clean data from various data sources so that it can be stored in a uniform manner within a data warehouse. That data can then be used by various business intelligence processes to give an organization a more in-depth analysis of the current state of the company and where it is heading. A standard ETL process used by a health care system, for example, might import all of its patients' names, diagnoses, and prescriptions into a unified data warehouse so that trends can be spotted in outbreaks like the flu, and potential illnesses can be predicted for a patient based on other patients with similar symptoms.
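    The three stages map naturally onto three small functions; the sketch below is purely illustrative (the CSV file name, its column names, and the SQLite warehouse are made-up stand-ins for whatever sources and target you actually have):

        import csv, sqlite3

        def extract(path):
            with open(path, newline="") as f:            # pull raw rows from a source system export
                yield from csv.DictReader(f)

        def transform(rows):
            for row in rows:                             # clean and normalize into a uniform shape
                yield {
                    "patient_name": row["name"].strip().title(),
                    "diagnosis":    row["diagnosis"].strip().lower(),
                    "prescription": row["prescription"].strip().lower(),
                }

        def load(rows, db="warehouse.db"):               # store the cleaned rows in the warehouse
            con = sqlite3.connect(db)
            con.execute("CREATE TABLE IF NOT EXISTS patient_facts "
                        "(patient_name TEXT, diagnosis TEXT, prescription TEXT)")
            con.executemany("INSERT INTO patient_facts VALUES "
                            "(:patient_name, :diagnosis, :prescription)", rows)
            con.commit()
            con.close()

        load(transform(extract("clinic_export.csv")))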

    Read the article

  • New eBook: In-Memory Data Grids for Dummies

    - by jeckels
    We've just released a new eBook, In-Memory Data Grids for Dummies. This is a fantastic resource if you're looking to explain in-memory data grids to colleagues, convince your boss of their value, or even discover some new use cases for your existing investment. In true "Dummies" style, this eBook will walk you through the basic tenets of in-memory data grids, their common use cases, where IMDGs sit in your architecture, and some key considerations when looking to implement them. While the title may say "Dummies," we know you'll find some useful overview and technical information in the resource. It's published by us on the Coherence team in partnership with Wiley (the "Dummies" company), but it's not only about Coherence or Oracle. In fact, we took pains to make this book fairly neutral to give you the best information, not a product pitch. Happy reading! Download the eBook now

    Read the article

  • How to query Oracle from SQL Server ?

    - by Albert Widjaja
    Hi everyone, I'm having difficulties creating a connection from my SQL Server 2008 Enterprise SP2 x64 to an Oracle 10g database, even though I have already installed the Oracle Client 11g R2. I've followed the steps in this article: http://www.ideaexcursion.com/2009/01/05/connecting-to-oracle-from-sql-server/ and additionally added TNS_ADMIN to the server environment variables, pointing to C:\Oracle\product\11.2.0\client_1\network\admin. What is working now:
    - TNSNAMES.ORA has been copied successfully from another developer workstation
    - I can TNSPING the DB instance
    - I can connect to the database using SQL*Plus and perform any SQL commands
    - I can create the DSN, but ONLY when using C:\Windows\SysWOW64\odbcad32.exe; the normal odbcad32 doesn't show the DSN I have just created. The DSN created this way passes the test connection.
    My goal: to be able to select the Oracle connection in the Linked Server object, but there is still no effect even after I restart the server (Windows Server 2008 Enterprise 64-bit SP2). Any ideas for resolving this problem would be greatly appreciated. Thanks.

    Read the article

  • Windows Server 2008 Migration - Did I miss something?

    - by DevNULL
    I'm running into a few complications in my migration process. My main role has been as a Linux / Sun administrator for 15 years, so the Windows Server 2008 environment is a bit new to me, but understandable. Here's our situation and reason for migrating: we have a group of developers that develop VERY low-level software in Visual C with some inline assembler. All the workstations were separate from each other, which caused consistency problems with development libraries, versions, etc. Our goal was to throw them all onto a Windows domain where we can control workstation installations, hotfixes (which can cause enormous problems), software versions, etc. All development workstations are running Windows XP x32 (SP3) and x64 (SP2). I'm running into user permission problems, and I was wondering whether I missed one, two, or a handful of things during my deployment. Here is what I have currently done:
    - Installed and activated Windows Server 2008
    - Added roles for DNS and Active Directory
    - Configured DNS with WINS for NetBIOS name usage
    - Added developers to AD and mapped their shared folders to their profile
    - Added roles for IIS7 and configured the developers' SVN
    - Installed MySQL Enterprise Edition for development usage
    Not having a firm understanding of Group Policy, I haven't delved deeply into that realm yet. Problems I'm encountering:
    1. When I configure any XP workstation to log on to our domain, once a user uses their new AD login, everything goes well except that they have very restrictive permissions. (E.g., if a user opens any existing file, they don't have write access, except in their Documents folder.) Since these guys are working on low system-level events, they need to read/write all files. All I'm looking to restrict is software installations.
    2. Am I correct to assume that I can use WSUS to maintain the domain's hotfixes and updates pushed to the workstations?
    3. I need to map a centralized shared development drive upon the user's login. This is open to EVERYONE. Right now I have the users' folders mapped upon login through their AD profile. But how do I map a share if I've already defined one within their profile in AD?
    4. Do I have to configure and define a group policy for the domain users?
    5. Can I use volume mirroring to mirror / sync two drives on two separate servers, or should I just script an rsync or use a Microsoft sync tool? The drives simply store nightly system images.
    Any responses would be greatly appreciated.

    Read the article

  • TFS 2010 migration from one server to another

    - by Kabir Rao
    We have followed every step of http://msdn.microsoft.com/en-us/library/ms404869(v=vs.100).aspx, an extremely poorly worded article. We are not able to see the Dashboards of SharePoint projects. In some cases (mostly Scrum projects, I guess), I get "The webpage cannot be found". In other cases: "Unable to refresh data for a data connection in the workbook. Try again or contact your system administrator. The following connections failed to refresh: TfsOlapReport". Any help would be very much appreciated.

    Read the article

  • iPhone core data problem : referenceData64 only defined for abstract class

    - by occe
    I have an application that downloads/parses a big XML file and store the information using core data (approx. 4000 objects (entities)). The XML is loaded/parsed in a different thread, which has its own NSManagedObjectContext. When trying to save the entities to the persistent store, I sometimes get the following error (about 20%) 2010-03-03 23:41:42.802 xxx[7487:4203] Exception in XML saving 2010-03-03 23:41:42.802 xxx[7487:4203] Description: * -_referenceData64 only defined for abstract class. Define -[NSTemporaryObjectID_default _referenceData64]! 2010-03-03 23:41:42.803 xxx[7487:4203] Name: NSInvalidArgumentException 2010-03-03 23:41:42.804 xxx[7487:4203] UserInfo: (null) 2010-03-03 23:41:42.805 xxx[7487:4203] Reason: * -_referenceData64 only defined for abstract class. Define -[NSTemporaryObjectID_default _referenceData64]! I have a simple integer to keep track of the entities the application creates compared to the insertedObjects property in the NSManagedObjectContext before saving, and when I get the error, these numbers do not match, insertedObjects in the NSManagedObjectContext is missing about 10 entities. I do not know how I should continue to investigate this problem, anyone has any idea how to fix this? Thanks /oscar

    Read the article

  • How to create this MongoMapper custom data type?

    - by Kapslok
    I'm trying to create a custom MongoMapper data type in RoR 2.3.5 called Translatable: class Translatable < String def initialize(translation, culture="en") end def languages end def has_translation(culture)? end def self.to_mongo(value) end def self.from_mongo(value) end end I want to be able to use it like this: class Page include MongoMapper::Document key :title, Translatable, :required => true key :content, String end Then implement like this: p = Page.new p.title = "Hello" p.title(:fr) = "Bonjour" p.title(:es) = "Hola" p.content = "Some content here" p.save p = Page.first p.languages => [:en, :fr, :es] p.has_translation(:fr) => true en = p.title => "Hello" en = p.title(:en) => "Hello" fr = p.title(:fr) => "Bonjour" es = p.title(:es) => "Hola" In mongoDB I imagine the information would be stored like: { "_id" : ObjectId("4b98cd7803bca46ca6000002"), "title" : { "en" : "Hello", "fr" : "Bonjour", "es" : "Hola" }, "content" : "Some content here" } So Page.title is a string that defaults to English (:en) when culture is not specified. I would really appreciate any help.

    Read the article

  • Algorithm for converting hierarchical flat data (w/ ParentID) into sorted flat list w/ indentation l

    - by eagle
    I have the following structure: MyClass { guid ID guid ParentID string Name } I'd like to create an array which contains the elements in the order they should be displayed in a hierarchy (e.g. according to their "left" values), as well as a hash which maps the guid to the indentation level. For example: ID Name ParentID ------------------------ 1 Cats 2 2 Animal NULL 3 Tiger 1 4 Book NULL 5 Airplane NULL This would essentially produce the following objects: // Array is an array of all the elements sorted by the way you would see them in a fully expanded tree Array[0] = "Airplane" Array[1] = "Animal" Array[2] = "Cats" Array[3] = "Tiger" Array[4] = "Book" // IndentationLevel is a hash of GUIDs to IndentationLevels. IndentationLevel["1"] = 1 IndentationLevel["2"] = 0 IndentationLevel["3"] = 2 IndentationLevel["4"] = 0 IndentationLevel["5"] = 0 For clarity, this is what the hierarchy looks like: Airplane Animal Cats Tiger Book I'd like to iterate through the items the least amount of times possible. I also don't want to create a hierarchical data structure. I'd prefer to use arrays, hashes, stacks, or queues. The two objectives are: Store a hash of the ID to the indentation level. Sort the list that holds all the objects according to their left values. When I get the list of elements, they are in no particular order. Siblings should be ordered by their Name property. Update: This may seem like I haven't tried coming up with a solution myself and simply want others to do the work for me. However, I have tried coming up with three different solutions, and I've gotten stuck on each. One reason might be that I've tried to avoid recursion (maybe wrongly so). I'm not posting the partial solutions I have so far since they are incorrect and may badly influence the solutions of others.
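    Recursion isn't actually required: one pass groups the rows by ParentID, and an explicit stack then walks that map depth-first with each sibling group pre-sorted by name, producing exactly the array and indentation hash described above. A small sketch with the sample rows from the question (plain integers standing in for the GUIDs):

        from collections import defaultdict

        # (id, name, parent_id) -- the sample rows from the question; None plays the role of NULL.
        items = [
            (1, "Cats", 2), (2, "Animal", None), (3, "Tiger", 1),
            (4, "Book", None), (5, "Airplane", None),
        ]

        names = {item_id: name for item_id, name, _ in items}
        children = defaultdict(list)                 # parent_id -> [(name, id), ...]
        for item_id, name, parent in items:
            children[parent].append((name, item_id))

        ordered, indent = [], {}                     # display order + id -> indentation level
        stack = [(i, 0) for _, i in sorted(children[None], reverse=True)]
        while stack:
            item_id, level = stack.pop()
            ordered.append(names[item_id])
            indent[item_id] = level
            # push children in reverse name order so they pop in ascending name order
            for _, child_id in sorted(children[item_id], reverse=True):
                stack.append((child_id, level + 1))

        print(ordered)   # ['Airplane', 'Animal', 'Cats', 'Tiger', 'Book']
        print(indent)    # {5: 0, 2: 0, 1: 1, 3: 2, 4: 0}

    Each row is grouped once and pushed/popped once, so the cost is O(n log n), dominated by sorting the sibling groups.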

    Read the article

  • Complex data types in WCF?

    - by Hojou
    I've run into a problem trying to return an object that holds a collection of childobjects that again can hold a collection of grandchild objects. I get an error, 'connection forcibly closed by host'. Is there any way to make this work? I currently have a structure resembling this: pseudo code: Person: IEnumerable<Order> Order: IEnumberable<OrderLine> All three objects have the DataContract attribute and all public properties i want exposed (including the IEnumerable's) have the DataMember attribute. I have multiple OperationContract's on my service and all the methods returning a single object OR an IEnumerable of an object works perfectly. It's only when i try to nest IEnumerable that it turns bad. Also in my client service reference i picked the generic list as my collection type. I just want to emphasize, only one of my operations/methods fail with this error - the rest of them work perfectly. EDIT (more detailed error description): [SocketException (0x2746): An existing connection was forcibly closed by the remote host] [IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.] [WebException: The underlying connection was closed: An unexpected error occurred on a receive.] [CommunicationException: An error occurred while receiving the HTTP response to http://myservice.mydomain.dk/MyService.svc. This could be due to the service endpoint binding not using the HTTP protocol. This could also be due to an HTTP request context being aborted by the server (possibly due to the service shutting down). See server logs for more details.] I tried looking for logs but i can't find any... also i'm using a WSHttpBinding and an http endpoint.

    Read the article

  • Marshal.StringToCoTaskMemAnsi converting non-Latin characters when sending raw data to a printer

    - by rem
    For sending raw data to a thermal DATAMAX printer I'm using RawPrinterHelper class from this Microsoft KB article. When a string sent to printer contains only Latin characters, everything is OK. But non-Latin, in my case Russian characters in a string, are not printed correct. I think the problem is in using Marshal.StringToCoTaskMemAnsi method for converting the string: public static bool SendStringToPrinter(string szPrinterName, string szString) { IntPtr pBytes; Int32 dwCount; // How many characters are in the string? dwCount = szString.Length; // Assume that the printer is expecting ANSI text, and then convert // the string to ANSI text. pBytes = Marshal.StringToCoTaskMemAnsi(szString); // Send the converted ANSI string to the printer. SendBytesToPrinter(szPrinterName, pBytes, dwCount); Marshal.FreeCoTaskMem(pBytes); return true; } Just to note, Russian characters in the string are put in hex format, like "\x83", but nevertheless the method doesn't put this hex value in unmanaged memory as it is, but converts it, I think, according with ANSI code page to a character and then printer can not read it correctly. If I try to compose a file, using Hex editor and put correct hex values in place of non-Latin characters and then send the file to a printer using another method from the same class SendFileToPrinter, everything, including Russian characters is printed correctly. How in this case the problem with sending string, containing non-Latin characters, could be solved?

    Read the article

  • How to fetch distinct values in Core Data?

    - by Andy
    So in looking through Core Data Snippets, I found the following code: ... [request setEntity:entity]; [request setResultType:NSDictionaryResultType]; [request setReturnsDistinctValues:YES]; [request setPropertiesToFetch:[NSArray arrayWithObject:@"<#Attribute name#>"]]; // Execute the fetch NSError *error; id requestedValue = nil; // WTF? This isn't defined or used anywhere NSArray *objects = [managedObjectContext executeFetchRequest:request error:&error]; if (objects == nil) { // handle the error } This is great and seems perfect for what I need...but how does one actually use it? I assume since it's returning dictionaries, I need a key to get at the values - but where's the key defined? Is that the "id requestedValue = nil" line? If so, how does "requestedValue" become the key? Xcode gives me a compiler warning about an unused variable at the "requestedValue" declaration. I feel like I'm missing something here. Thanks in advance for any assistance you can offer.

    Read the article

  • Substitute values (for specific dates) from a second data frame to the first data frame

    - by user1665355
    I have two time series data frames: The first one: head(df1) : GMT MSCI ACWI DJGlbl Russell 1000 Russell Dev S&P GSCI Industrial S&P GSCI Precious 1999-03-01 -0.7000000 0.2000000 -0.1000000 -1.5000000 -1.0000000 -0.4000000 1999-03-02 -0.5035247 0.0998004 -0.7007007 -0.2030457 0.4040404 -0.3012048 1999-03-03 -0.2024291 0.2991027 0.0000000 -0.6103764 0.1006036 -0.1007049 1999-03-04 0.7099391 0.2982107 1.5120968 -0.1023541 0.5025126 0.4032258 1999-03-05 2.4169184 0.8919722 2.1847071 2.7663934 -1.2000000 0.0000000 1999-03-08 0.3933137 0.3929273 0.5830904 -0.0997009 -0.2024291 1.1044177 tail(df1) : GMT MSCI ACWI DJGlbl Russell 1000 Russell Dev S&P GSCI Industrial S&P GSCI Precious 2011-12-23 0.68241470 0.84790673 0.9441385 0.6116208 0.5822862 -0.2345300 2011-12-26 -0.05213764 0.00000000 0.0000000 0.0000000 0.0000000 0.0000000 2011-12-27 0.20865936 0.05254861 0.3117693 0.2431611 0.0000000 -0.7233273 2011-12-28 -0.62467465 -1.20798319 -1.1655012 -0.9702850 -2.0414381 -2.4043716 2011-12-29 0.52383447 0.47846890 0.8647799 0.5511329 -0.0933126 -1.2504666 2011-12-30 0.26055237 1.03174603 -0.4676539 1.2180268 1.9613948 1.7388017 The second one: head(df2) : GMT MSCI.ACWI DJGlbl Russell.1000 Russell.Dev S.P.GSCI.Industrial S.P.GSCI.Precious 1999-06-01 0.00000000 0.24438520 0.0000000 0 -0.88465521 0.008522842 1999-07-01 0.12630441 0.06755621 0.0000000 0 0.29394697 0.000000000 1999-08-02 0.07441812 0.18922829 0.0000000 0 0.02697299 -0.107155063 1999-09-01 -0.36952701 0.08684107 0.1117509 0 0.24520976 0.000000000 1999-10-01 0.00000000 0.00000000 0.0000000 0 0.00000000 1.941266205 1999-11-01 0.41879925 0.00000000 0.0000000 0 0.00000000 -0.197897901 tail(df2) : GMT MSCI.ACWI DJGlbl Russell.1000 Russell.Dev S.P.GSCI.Industrial S.P.GSCI.Precious 2011-07-01 0.00000000 0.0000000 0.0000000 0.0000000 0.00000000 -0.1141162 2011-08-01 0.00000000 0.0000000 0.0000000 0.0000000 0.02627347 0.0000000 2011-09-01 -0.02470873 0.2977585 -0.0911891 0.6367605 0.00000000 0.2830977 2011-10-03 0.42495188 0.0000000 0.4200743 -0.4420027 -0.41012646 0.0000000 2011-11-01 0.00000000 0.0000000 0.0000000 -0.6597739 0.00000000 0.0000000 2011-12-01 0.50273034 0.0000000 0.0000000 0.6476393 0.00000000 0.0000000 The first df cointains daily observations. The second df contains only the "first day of each month" forecasted values. I would like to substitute the values from the second df into the first one. In other words, the "first day of each month" values in the first df will be substituted for the "first day of each month" values from the second df. I tried to write an lapply loop that substitutes the values and was only trying to use match function. But I failed. I could not find the similar question at StackOverflow either... Greatful for any suggestions!
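    The substitution is just "for every date present in both frames, overwrite df1's row with df2's row". The question is posed in R, so purely to illustrate the date-matching logic, here is the same operation sketched with pandas (the file names are placeholders); the equivalent R idea is to index df1 by the rownames of df2, after reconciling the column names, and assign df2 into that slice.

        import pandas as pd

        df1 = pd.read_csv("daily.csv",   index_col=0, parse_dates=True)   # daily observations
        df2 = pd.read_csv("monthly.csv", index_col=0, parse_dates=True)   # first-day-of-month forecasts
        df2.columns = df1.columns            # reconcile "MSCI ACWI" vs "MSCI.ACWI" style headers

        common = df1.index.intersection(df2.index)   # the month-start dates present in both frames
        df1.loc[common] = df2.loc[common]            # overwrite only those rows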

    Read the article

  • What are the correct bindings for an NSComboBox for use with Core Data

    - by theMikeSwan
    Imagine if you will a Core Data app with two entities (Employee, and Department). Employees have a to-one relationship with department (department) and the inverse is a to-many relationship (employees). In the UI you can select individual Employee entities and edit the details in a detail area (there are of course other attributes and there is UI for adding and editing Department entities). When using a popup button the bindings are: content = PopUpArrayController.arrangedObjects content values = PopUpArrayController.arrangedObjects.name (name is an NSString) selected object = EmployeeArrayController.selection.department.name This allows for viewing of all departments in the popup menu, correct selection of the current Employee's department, and allows that department to be changed as expected. The goal is to change this for an NSComboBox so that the user can tab to the box and type the department name in without switching to the mouse. I have tried numerous different bindings to accomplish this. I even had it work for one run with these bindings: content = PopUpArrayController.arrangedObjects.name value = EmployeeArrayController.selection.department.name At least once this worked as expected (it even added a new department when the entered text did not match any existing department). Now however it will display the available Departments and auto complete but will not update the model with the correct value when the value is changed in the combo box. If the Department is set or changed with the popup the correct department is shown in the combo box. Does anyone know what I am missing? Thanks.

    Read the article

  • LINQ Normalizing data

    - by Brennan Mann
    I am using an OMS that stores up to three line items per record in the database. Below is an example of an order containing five line items:
    Order Header
      Order Detail: Prod 1, Prod 2, Prod 3
      Order Detail: Prod 4, Prod 5
    That is one order header record and two detail records. My goal is to have a one-to-one relation for detail records (i.e., one detail record per line item). In the past, I used a UNION ALL SQL statement to extract the data. Is there a better approach to this problem using LINQ? From what I have read, a UNION statement can tax the process. Below is my first attempt at using LINQ; any feedback, suggestions, or recommendations would be greatly appreciated.
    var orderdetail = (from o in context.ORDERSUBHEADs
                       select new { edpNo = o.EDPNOS_001, price = o.EXTPRICES_001, qty = o.ITEMQTYS_001 })
        .Union(from o in context.ORDERSUBHEADs
               select new { edpNo = o.EDPNOS_002, price = o.EXTPRICES_002, qty = o.ITEMQTYS_002 })
        .Union(from o in context.ORDERSUBHEADs
               select new { edpNo = o.EDPNOS_003, price = o.EXTPRICES_003, qty = o.ITEMQTYS_003 });

    Read the article

  • Core Data NSPredicate to filter results

    - by Bryan
    I have a NSManagedObject that contains a bID and a pID. Within the set of NSManagedObjects, I only want a subset returned and I'm struggling to find the correct NSPredicate or way to get what I need out of Core Data. Here's my full list: bid pid 41 0 42 41 43 0 44 0 47 41 48 0 49 0 50 43 There is a parent-child relationship above. Rules: If a record's PID = 0, it means that that record IS a parent record. If a record's PID != 0, then that record's PID refers to it's parent record's BID. Example: 1) BID = 41 is a parent record. Why? Because records BID=42 and record BID=47 have PID's of 41, meaning those are children of its PID record. 2) BID = 42 has a parent record with a BID = 41. 3) BID = 43 is a parent record. 4) BID = 44 is a parent record. 5) BID = 47 has a parent record with a BID = 41 because its PID = 41. See #1 above. 6) BID = 48 is a parent record. 7) BID = 49 is a parent record. 8) BID = 50 is a child record, and its parent record has a BID = 43. See the pattern? Now, basically from that, I want only the following rows fetched: bid pid 44 0 47 41 48 0 49 0 50 43 BID = 41, BID = 48, BID = 49 should all be returned because there are no records with a PID equal to their BID. BID = 47 should be returned because it is the most recent child of PID = 41. BID = 50 should be returned because it is the most recent child of PID = 43. Hope this helps explain it more.

    Read the article

  • How should I go about implementing a points-to analysis in Maude?

    - by reprogrammer
    I'm going to implement a points-to analysis algorithm. I'd like to implement this analysis mainly based on the algorithm by Whaley and Lam. Whaley and Lam use a BDD based implementation of Datalog to represent and compute the points-to analysis relations. The following lists some of the relations that are used in a typical points-to analysis. Note that D(w, z) :- A(w, x),B(x, y), C(y, z) means D(w, z) is true if A(w, x), B(x, y), and C(y, z) are all true. BDD is the data structure used to represent these relations. Relations input vP0 (variable : V, heap : H) input store (base : V, field : F, source : V) input load (base : V, field : F, dest : V) input assign (dest : V, source : V) output vP (variable : V, heap : H) output hP (base : H, field : F, target : H) Rules vP(v, h) :- vP0(v, h) vP(v1, h) :- assign(v1, v2), vP(v2, h) hP(h1, f,h2) :- store(v1, f, v2), vP(v1, h1), vP(v2, h2) vP(v2, h2) :- load(v1, f, v2), vP(v1, h1), hP(h1, f, h2) I need to understand if Maude is a good environment for implementing points-to analysis. I noticed that Maude uses a BDD library called BuDDy. But, it looks like that Maude uses BDDs for a different purpose, i.e. unification. So, I thought I might be able to use Maude instead of a Datalog engine to compute the relations of my points-to analysis. I assume Maude propagates independent information concurrently. And this concurrency could potentially make my points-to analysis faster than sequential processing of rules. But, I don't know the best way to represent my relations in Maude. Should I implement BDD in Maude myself, or Maude's internal unification based on BDD has the same effect?
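    Independently of whether Maude (or a Datalog engine) ends up being the implementation vehicle, it can help to have a tiny reference implementation of what the four rules above compute, to test any BDD- or Maude-based version against. A naive fixpoint over explicit tuple sets, a sketch rather than anything efficient:

        def points_to(vP0, assign, store, load):
            """Naive evaluation of the four rules; each relation is a set of tuples."""
            vP, hP = set(vP0), set()
            changed = True
            while changed:
                changed = False
                new_vP = {(v1, h) for (v1, v2) in assign
                                  for (v, h) in vP if v == v2}
                new_hP = {(h1, f, h2) for (v1, f, v2) in store
                                      for (va, h1) in vP if va == v1
                                      for (vb, h2) in vP if vb == v2}
                new_vP |= {(v2, h2) for (v1, f, v2) in load
                                    for (va, h1) in vP if va == v1
                                    for (hb, g, h2) in hP if hb == h1 and g == f}
                if not new_vP <= vP or not new_hP <= hP:
                    vP |= new_vP
                    hP |= new_hP
                    changed = True
            return vP, hP

        # Example: vP0={("p","h1")}, assign={("q","p")} derives ("q","h1") in vP.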

    Read the article

  • haskell: a data structure for storing ascending integers with a very fast lookup

    - by valya
    Hello! (This question is related to my previous question, or rather to my answer to it.) I want to store all cubes of natural numbers in a structure and look up specific integers to see if they are perfect cubes. For example:
    cubes = map (\x -> x*x*x) [1..]
    is_cube n = n == (head $ dropWhile (<n) cubes)
    It is much faster than calculating the cube root, but it has complexity of O(n^(1/3)) (am I right?). I think using a more complex data structure would be better. For example, in C I could store the length of an already generated array (not a list - for faster indexing) and do a binary search. It would be O(log n) with a lower coefficient than in another answer to that question. The problem is, I can't express it in Haskell (and I don't think I should). Or I can use a hash function (like mod). But I think it would be much more memory-consuming to have several lists (or a list of lists), and it won't lower the complexity of lookup (still O(n^(1/3))), only the coefficient. I thought about a kind of tree, but I have no clever ideas (sadly I've never studied CS). I think the fact that all integers are ascending will make my tree ill-balanced for lookups. And I'm pretty sure this fact about ascending integers can be a great advantage for lookups, but I don't know how to use it properly (see my first solution, which I can't express in Haskell).

    Read the article

  • Core data relationship memory leak

    - by cfihelp
    I have a strange (to me) memory leak when accessing an entity in a relationship. Series and Tiles have an inverse relationship to each other. // set up the fetch request NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"Series" inManagedObjectContext:managedObjectContext]; [fetchRequest setEntity:entity]; // grab all of the series in the core data store NSError *error = nil; availableSeries = [[NSArray alloc] initWithArray:[managedObjectContext executeFetchRequest:fetchRequest error:&error]]; [fetchRequest release]; // grab one of the series Series *currentSeries = [availableSeries objectAtIndex:1]; // load all of the tiles attached to the series through the relationship NSArray *myTiles = [currentSeries.tile allObjects]; // 16 byte leak here! Instruments reports back that the final line has a 16 byte leak cause by NSPlaceHolderString. Stack trace: 2 UIKit UIApplicationMain 3 UIKit -[UIApplication _run] 4 CoreFoundation CFRunLoopRunInMode 5 CoreFoundation CFRunLoopRunSpecific 6 GraphicsServices PurpleEventCallback 7 UIKit _UIApplicationHandleEvent 8 UIKit -[UIApplication sendEvent:] 9 UIKit -[UIApplication handleEvent:withNewEvent:] 10 UIKit -[UIApplication _runWithURL:sourceBundleID:] 11 UIKit -[UIApplication _performInitializationWithURL:sourceBundleID:] 12 Memory -[AppDelegate_Phone application:didFinishLaunchingWithOptions:] /Users/cfish/svnrepo/Memory/src/Memory/iPhone/AppDelegate_Phone.m:49 13 UIKit -[UIViewController view] 14 Memory -[HomeScreenController_Phone viewDidLoad] /Users/cfish/svnrepo/Memory/src/Memory/iPhone/HomeScreenController_Phone.m:58 15 CoreData -[_NSFaultingMutableSet allObjects] 16 CoreData -[_NSFaultingMutableSet willRead] 17 CoreData -[NSFaultHandler retainedFulfillAggregateFaultForObject:andRelationship:withContext:] 18 CoreData -[NSSQLCore retainedRelationshipDataWithSourceID:forRelationship:withContext:] 19 CoreData -[NSSQLCore newFetchedPKsForSourceID:andRelationship:] 20 CoreData -[NSSQLCore rawSQLTextForToManyFaultStatement:stripBindVariables:swapEKPK:] 21 Foundation +[NSString stringWithFormat:] 22 Foundation -[NSPlaceholderString initWithFormat:locale:arguments:] 23 CoreFoundation _CFStringCreateWithFormatAndArgumentsAux 24 CoreFoundation _CFStringAppendFormatAndArgumentsAux 25 Foundation _NSDescriptionWithLocaleFunc 26 CoreFoundation -[NSObject respondsToSelector:] 27 libobjc.A.dylib class_respondsToSelector 28 libobjc.A.dylib lookUpMethod 29 libobjc.A.dylib _cache_addForwardEntry 30 libobjc.A.dylib _malloc_internal I think I'm missing something obvious but I can't quite figure out what. Thanks for your help! Update: I've copied the offending chunk of code to the first part of applicationDidFinishLaunching and it still leaks. Could there be something wrong with my model?

    Read the article
