Search Results

Search found 78101 results on 3125 pages for 'back up data'.


  • How do I set default values on new properties for existing entities after lightweight Core Data migration?

    - by Moritz
    I've successfully completed lightweight migration on my Core Data model. My custom entity Vehicle received a new property 'tirePressure', which is an optional property of type double with the default value 0.00. When 'old' Vehicles are fetched from the store (Vehicles that were created before the migration took place) the value for their 'tirePressure' property is nil. (Is that expected behavior?) So I thought: "No problem, I'll just do this in the Vehicle class:" - (void)awakeFromFetch { [super awakeFromFetch]; if (nil == self.tirePressure) { [self willChangeValueForKey:@"tirePressure"]; self.tirePressure = [NSNumber numberWithDouble:0.0]; [self didChangeValueForKey:@"tirePressure"]; } } Since "change processing is explicitly disabled around" awakeFromFetch, I thought the calls to willChangeValueForKey and didChangeValueForKey would mark 'tirePressure' as dirty. But they don't. Every time these Vehicles are fetched from the store 'tirePressure' continues to be nil, despite having saved the context.
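
    A minimal sketch of one workaround, not from the original post: because change processing is disabled in -awakeFromFetch, the dirtying has to happen in a normal context instead, for example as a one-time backfill pass right after the store is opened. Vehicle and tirePressure are the names from the question; the context variable and helper function are assumptions.

    ```objc
    #import <CoreData/CoreData.h>

    // Hypothetical one-time backfill after lightweight migration: old rows have
    // tirePressure == nil, so set the default through the normal KVC path and save.
    static void BackfillTirePressure(NSManagedObjectContext *context) {
        NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Vehicle"];
        request.predicate = [NSPredicate predicateWithFormat:@"tirePressure == nil"];

        NSError *error = nil;
        NSArray *oldVehicles = [context executeFetchRequest:request error:&error];
        for (NSManagedObject *vehicle in oldVehicles) {
            // Outside of -awakeFromFetch this registers as a real change,
            // so the save below persists the default value.
            [vehicle setValue:[NSNumber numberWithDouble:0.0] forKey:@"tirePressure"];
        }
        if (![context save:&error]) {
            NSLog(@"Backfill failed: %@", error);
        }
    }
    ```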

    Read the article

  • What is a good approach for a Data Access Layer?

    - by Adil Mughal
    Our software is a customized Human Resource Management System (HRMS) using ASP.NET with Oracle as the database, and now we are moving to make it a product that supports multiple tenants with their own databases. Our options: Use NHibernate to support multiple databases and make use of OO, but we are concerned about the NHibernate learning curve and any problems we might face. Make a generalized DAL which will continue working with Oracle using stored procedures and use tools to convert it to other databases such as SQL Server or MySQL; there is a risk associated with having to support multiple database-dependent versions of a single script. Provide the software as a service (SaaS) and maintain the way we conduct business; however, there may be clients who do not want or trust the cloud or other SaaS business models. With this in mind, what's the best data access layer technique?

    Read the article

  • Select data from three different tables with null data

    - by user3678972
    I am new to SQL. My question is how to get data from three different tables even when some of it is null. I have tried a query as below: SELECT * FROM [USER] JOIN [Location] ON ([Location].UserId = [USER].Id) JOIN [ParentChild] ON ([ParentChild].UserId = [USER].Id) WHERE ParentId=7 which I found from this link. It works, but it does not fetch each and every row associated with the ParentId: it only fetches data that is available in all the tables, and omits rows that have no match in the Location table even though they come under the given ParentId. For example: UserId ParentId 1 7 8 7 For userId 8 there is data available in the Location table, so it fetches all the data. But there is no data for userId 1 in the Location table, so the query does not work for it. I want each and every row; if there is no data for a userId it can return null columns instead. Is that possible? I hope everyone can understand my problem.

    Read the article

  • How does Core Data determine if an NSObject's data can be dropped?

    - by Kevin
    In the app I am working on now I was storing about 500 images in Core Data. I have since pulled those images out and store them in the file system, but in the process I found that the app would crash on the device if I had an array of 500 objects with image data in them. An array of 500 object IDs, with the image data in those objects, worked fine. The 500 objects without the image data also worked fine. I found that I got the best performance with both an array of object IDs and image data stored on the filesystem instead of in Core Data. The conclusion I came to was that having an object in an array tells Core Data I am "using" that object, so Core Data holds on to the data. Is this correct?
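
    For what it's worth, the behavior described matches how faulting works in general: as long as an array holds strong references to fully realized managed objects, Core Data cannot turn them back into faults and drop their row data. Holding object IDs (or external files) avoids that, and already-fetched objects can be re-faulted explicitly. A hedged sketch, assuming an existing context and a fetched photos array:

    ```objc
    #import <CoreData/CoreData.h>

    // Hypothetical cleanup pass: after using each object's blob, turn the object
    // back into a fault so the context can release the image data it holds.
    void ReleaseImageData(NSManagedObjectContext *context, NSArray *photos) {
        for (NSManagedObject *photo in photos) {
            // ... read the photo's image data here ...

            // mergeChanges:NO discards unsaved edits on this object, so only
            // re-fault objects that are not being modified.
            [context refreshObject:photo mergeChanges:NO];
        }
    }
    ```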

    Read the article

  • How to handle encryption key conflicts when synchronizing data?

    - by Rafael
    Assume that there is data that gets synchronized between several devices. The data is protected with a symmetric encryption algorithm and a key. The key is stored on each device and encrypted with a password. When a user changes the password only the key gets re-encrypted. Under normal circumstances, when there is a good network connection to other peers, the current key gets synchronized and all data on the new device gets encrypted with the same key. But how to handle situations where a new device doesn’t have a network connection and e.g. creates its own new, but incompatible key? How to keep the usability as high as possible under such circumstances? The application could detect that there is no network and hence refuse to start. That’s very bad usability in my opinion, because the application isn’t functional at all in this case. I don’t consider this a solution. The application could ignore the missing network connection and create a new key. But what to do when the application gains a network connection? There will be several incompatible keys, and some parts of the underlying data could only be encrypted with one key and other parts with another key. The situation would get worse if there were more keys than just two, and the application would have to ask for a password every time an object that should be decrypted with a different key is needed. It is very messy and time-consuming to try to re-encrypt all data that is encrypted with another key with a main key. What should the main key even be in this case? The oldest key? The key with the most encrypted objects? What if the key got synchronized but not all objects that got encrypted with this particular key? How should the user know which particular password the application is asking for and why it probably takes very long to re-encrypt the data? It’s very hard to describe encryption “issues” to users. So far I haven't found an acceptable solution, nor some kind of generic strategy. Do you have some hints about a concrete strategy or some books / papers that describe synchronization of symmetrically encrypted data with keys that could cause conflicts?

    Read the article

  • How to switch from Core Data automatic lightweight migration to manual?

    - by Jaanus
    My situation is similar to this question. I am using lightweight migration with the following code, fairly vanilla from Apple docs and other SO threads. It runs upon app startup when initializing the Core Data stack. NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption, [NSNumber numberWithBool:YES], NSInferMappingModelAutomaticallyOption, nil]; NSError *error = nil; NSString *storeType = nil; if (USE_SQLITE) { // app configuration storeType = NSSQLiteStoreType; } else { storeType = NSBinaryStoreType; } persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:[self managedObjectModel]]; // the following line sometimes crashes on app startup if (![persistentStoreCoordinator addPersistentStoreWithType:storeType configuration:nil URL:[self persistentStoreURL] options:options error:&error]) { // handle the error } For some users, especially with slower devices, I have crashes confirmed by logs at the indicated line. I understand that a fix is to switch this to manual mapping and migration. What is the recipe to do that? The long way for me would be to go through all Apple docs, but I don't recall there being good examples and tutorials specifically for schema migration.
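
    Not an Apple-blessed recipe, but the manual path usually looks roughly like the sketch below: load the source model from the store's metadata, obtain (or infer) a mapping model, and let NSMigrationManager migrate to a new store file before adding it to the coordinator. storeURL and tempURL are assumed placeholders here; storeType is the same variable as in the code above, and error handling is abbreviated.

    ```objc
    #import <CoreData/CoreData.h>

    // Hedged sketch of manual migration, run before addPersistentStoreWithType:.
    NSError *error = nil;
    NSDictionary *sourceMetadata =
        [NSPersistentStoreCoordinator metadataForPersistentStoreOfType:storeType
                                                                    URL:storeURL
                                                                  error:&error];
    NSManagedObjectModel *sourceModel =
        [NSManagedObjectModel mergedModelFromBundles:nil forStoreMetadata:sourceMetadata];
    NSManagedObjectModel *destinationModel = [self managedObjectModel];

    // Prefer a hand-built mapping model from the bundle; fall back to inference.
    NSMappingModel *mapping =
        [NSMappingModel mappingModelFromBundles:nil
                                 forSourceModel:sourceModel
                               destinationModel:destinationModel];
    if (mapping == nil) {
        mapping = [NSMappingModel inferredMappingModelForSourceModel:sourceModel
                                                    destinationModel:destinationModel
                                                               error:&error];
    }

    NSMigrationManager *manager =
        [[NSMigrationManager alloc] initWithSourceModel:sourceModel
                                       destinationModel:destinationModel];
    BOOL migrated = [manager migrateStoreFromURL:storeURL
                                            type:storeType
                                         options:nil
                                withMappingModel:mapping
                                toDestinationURL:tempURL
                                 destinationType:storeType
                              destinationOptions:nil
                                           error:&error];
    // On success, swap the file at tempURL into place and then add it with
    // addPersistentStoreWithType:configuration:URL:options:error: as before,
    // without the automatic-migration options.
    ```

    Running the migration explicitly also lets you do it on a background thread with progress reporting, which helps with the slower devices mentioned above.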

    Read the article

  • Core Data migration failing with "Can't find model for source store" but managedObjectModel for source is present

    - by Ira Cooke
    I have a Cocoa application using Core Data which is now at the 4th version of its managed object model. My managed object model contains abstract entities, but so far I have managed to get migration working by creating appropriate mapping models and creating my persistent store using addPersistentStoreWithType:configuration:URL:options:error: with the NSMigratePersistentStoresAutomaticallyOption set to YES. NSDictionary *optionsDictionary = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] forKey:NSMigratePersistentStoresAutomaticallyOption]; NSURL *url = [NSURL fileURLWithPath: [applicationSupportFolder stringByAppendingPathComponent: @"MyApp.xml"]]; NSError *error=nil; [theCoordinator addPersistentStoreWithType:NSXMLStoreType configuration:nil URL:url options:optionsDictionary error:&error] This works fine when I migrate from model version 3 to 4, which is a migration that involves adding attributes to several entities. Now when I try to add a new model version (version 5), the call to addPersistentStoreWithType returns nil and the error remains empty. The migration from 4 to 5 involves adding a single attribute. I am struggling to debug the problem and have checked all of the following: The source database is in fact at version 4 and the persistentStoreCoordinator's managed object model is at version 5. The 4-to-5 mapping model, as well as the managed object models for versions 4 and 5, are present in the resources folder of my built application. I've tried various model upgrade paths. Strangely, I find that upgrading from the earlier version 3 to 5 works, but upgrading from 4 to 5 fails. I've tried adding a custom entity migration policy for the migration of the entity whose attributes are changing; in this case I overrode the method beginEntityMapping:manager:error:. Interestingly, this method does get called when migration works (i.e. when I migrate from 3 to 4, or from 3 to 5), but it does not get called in the case that fails (4 to 5). I'm pretty much at a loss as to where to proceed. Any ideas to help debug this problem would be much appreciated.
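
    One way to narrow this down (a debugging sketch, not from the original post) is to load the store's metadata and test every compiled model in the bundle against it; "Can't find model for source store" generally means no bundled model reports itself compatible with the store as it currently exists on disk. The "MyApp.momd" directory name and the url variable from the code above are assumptions.

    ```objc
    #import <CoreData/CoreData.h>

    // Hypothetical check: which bundled .mom actually matches the store on disk?
    NSError *error = nil;
    NSDictionary *metadata =
        [NSPersistentStoreCoordinator metadataForPersistentStoreOfType:NSXMLStoreType
                                                                    URL:url
                                                                  error:&error];
    // Versioned models live inside a .momd directory; "MyApp.momd" is a placeholder.
    NSArray *momPaths = [[NSBundle mainBundle] pathsForResourcesOfType:@"mom"
                                                           inDirectory:@"MyApp.momd"];
    for (NSString *path in momPaths) {
        NSManagedObjectModel *model = [[NSManagedObjectModel alloc]
            initWithContentsOfURL:[NSURL fileURLWithPath:path]];
        BOOL compatible = [model isConfiguration:nil
                     compatibleWithStoreMetadata:metadata];
        NSLog(@"%@ -> %@", [path lastPathComponent],
              compatible ? @"compatible" : @"NOT compatible");
    }
    ```

    If no bundled model turns out to be compatible with the current store (for example because the version 4 model was edited after that store was created), automatic migration has no source model to start from, which would also explain why the explicit 4-to-5 mapping is never consulted.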

    Read the article

  • Bridging the Gap in Cloud, Big Data, and Real-time

    - by Dain C. Hansen
    With all the buzz around big data and cloud computing, it is easy to overlook one of your most precious commodities—your data. Today’s businesses cannot stand still when it comes to data. Market success now depends on speed, volume, complexity, and keeping pace with the latest data integration breakthroughs. Are you up to speed with big data, cloud integration, and real-time analytics? Join us in this three-part blog series where we’ll look at each component in more detail. Meet us online on October 24th where we’ll take your questions about the issues you are facing in this brave new world of integration. Let’s start first with cloud. What happens with your data when you decide to implement a private cloud architecture? Or a public cloud? Data integration solutions play a vital role migrating data simply, efficiently, and reliably to the cloud; they are a necessary ingredient of any platform-as-a-service strategy because they support cloud deployments with data-layer application integration between on-premise and cloud environments of all kinds. For private cloud architectures, consolidation of your databases and data stores is an important step toward receiving the full benefits of cloud computing. Private cloud integration requires bidirectional replication between heterogeneous systems so you can perform data consolidation without interrupting your business operations. In addition, bulk loading and transforming data into and out of your private cloud is a crucial step for companies moving to a private cloud. There is also a need to manage data services as part of SOA/BPM solutions that enable agile application delivery and help build shared data services for organizations. But what about the public cloud? If you have moved your data to a public cloud application, you may also need to connect your on-premise enterprise systems and the cloud environment by moving data in bulk or as real-time transactions across geographies. For both public and private cloud architectures, Oracle offers a complete and extensible set of integration options that span not only data integration but also service and process integration, security, and management. For those companies investing in Oracle Cloud, you can move your data through Oracle SOA Suite using REST APIs to Oracle Messaging Cloud Service—a new service that lets applications deployed in Oracle Cloud securely and reliably communicate over Java Message Service. As an example of loading and transforming data into other public clouds, Oracle Data Integrator supports a knowledge module for Salesforce.com—now available on AppExchange. Other third-party knowledge modules are being developed by customers and partners every day. To learn more about how to leverage Oracle’s Data Integration products for cloud, join us live: Data Integration Breakthroughs Webcast on October 24th at 10 AM PST.

    Read the article

  • How to Secure a Data Role by Multiple Business Units

    - by Elie Wazen
    In this post we will see how a Role can be data-secured by multiple Business Units (BUs). Separate Data Roles are generally created for each BU if a corresponding data template generates roles on the basis of the BU dimension. The advantage of creating a policy with a rule that includes multiple BUs is that fewer entries need to be made while mapping these roles in HCM Role Provisioning Rules. This could facilitate maintenance for enterprises with a large number of Business Units. Note: The example below applies as well if the securing entity is Inventory Organization. Let us take for example the case of a user provisioned with the "Accounts Payable Manager - Vision Operations" Data Role in Fusion Applications. This user will be able to access Invoices in Vision Operations but will not be able to see Invoices in Vision Germany. Figure 1. A User with a Data Role restricting them to Data from BU: Vision Operations With the role granted above, this is what the user will see when they attempt to select Business Units while searching for AP Invoices. Figure 2. The List of Values of Business Units is limited to a single one. This is the effect of the Data Role granted to that user, as can be seen in Figure 1. In order to create a data role that secures by multiple BUs, we need to start by creating a condition that groups the Business Units we want to include in that data role. This is accomplished by creating a new condition against the BU View. That condition will later be used to create a data policy for our newly created Role. The BU View is a database resource and is accessed from APM, as seen in the search below. Figure 3. Viewing a Database Resource in APM The next step is to create a new condition, in which we define a SQL predicate that includes two BUs (the IDs below refer to Vision Operations and Vision Germany). At this point we have simply created a standalone condition. We have not used this condition yet, and security is therefore not affected. Figure 4. Custom Role that inherits the Purchase Order Overview Duty We are now ready to create our Data Policy. In APM, we search for our newly created Role and navigate to “Find Global Policies”. We query the Role we want to secure and navigate to view its global policies. Figure 5. The Job Role we plan on securing We can see that the role was not defined with a Data Policy, so we will create one that uses the condition we created earlier. Figure 6. Creating a New Data Policy In the General Information tab, we have to specify the DB Resource that the Security Policy applies to; in our case this is the BU View. Figure 7. Data Policy Definition - Selection of the DB Resource we will secure by In the Rules tab, we make the rule applicable to multiple values of the DB Resource we selected in the previous tab. This is where we associate the condition we created against the BU View with this data policy, by entering the condition name in the Condition field. Figure 8. Data Policy Rule The last step of defining the Data Policy consists of explicitly selecting the Actions that are governed by this Data Policy. In this case, for example, we select the Actions displayed below in the right pane. Once the record is saved, we are ready to use our newly secured Data Role. Figure 9. Data Policy Actions We can now see a new Data Policy associated with our Role. Figure 10. Role is now secured by a Data Policy We now assign that new Role to the User.
    Of course this does not have to be done in OIM and can be done using a Provisioning Rule in HCM. Figure 11. Role assigned to the User who previously was granted the Vision Ops secured role. Once that user accesses the Invoices Workarea, this is what they see: in the image below the LOV of Business Unit returns the two values defined in our data policy, namely Vision Operations and Vision Germany. Figure 12. The List of Values of Business Units now includes the two we included in our data policy. This is the effect of the data role granted to that user, as can be seen in Figure 11.

    Read the article

  • First Day of Data Integration Track at Oracle OpenWorld 2012

    - by Irem Radzik
    OpenWorld started at full speed for us today with a great set of sessions in the Data Integration track. After the exciting keynote session on Oracle Database 12c in the morning, Brad Adelberg, VP of Development for Data Integration products, presented Oracle’s data integration product strategy. His session highlighted the new requirements for data integration to achieve pervasive and continuous access to trusted data. The new requirements and product focus areas presented in this session are: provide access to any data at any source, on premise or in the cloud; enable zero-downtime operations and maximum performance; leverage real-time data for accurate business insights; and ensure high-quality data is used across the enterprise. During the session Brad walked through how Oracle’s data integration products, Oracle Data Integrator, Oracle GoldenGate, Oracle Enterprise Data Quality, and Oracle Data Service Integrator, deliver on these requirements and how recent product releases build on this strategy. Soon after Brad’s session we heard from a panel of Oracle GoldenGate customers, St. Jude Medical, Equifax, and Bank of America, about how they achieved zero-downtime operations using Oracle GoldenGate. The panel presented different use cases of GoldenGate, from active-active replication to offloading reporting. St. Jude Medical’s implementation especially, which involves the alert management system for patients that use their pacemakers, reminded me that in some cases downtime of mission-critical systems can be a matter of life or death. It is very comforting to hear that GoldenGate delivers highly reliable continuous availability for life-saving medical systems. In the afternoon, Nick Wagner from the Product Management team and I followed the customer panel with a review of Oracle GoldenGate 11gR2’s new features. Many of the questions we received from the audience were about GoldenGate’s new Integrated Capture for Oracle Database and the enhanced Conflict Management features, as well as how GoldenGate compares to Oracle Streams. In addition to giving details on GoldenGate’s unique capability to capture changed data with a direct integration to the Oracle DBMS engine, we reminded the audience that enhancements to Oracle GoldenGate will continue, while Streams will be primarily maintained. Last but not least, Tim Garrod and Ryan Fonnett from Raymond James presented a unified real-time data integration solution using Oracle Data Integrator and GoldenGate for their operational data store (ODS). The ODS supports application services across the enterprise, and providing timely data is a critical requirement. In this solution, Oracle GoldenGate does the log-based change data capture for Oracle Data Integrator’s near real-time data integration between heterogeneous systems. As Raymond James’ ODS supports mission-critical services for their advisors, the project team had to set up this integration environment to be highly available. During the session, Ryan and Tim explained how they use ODI to enable automated process execution and “always-on” integration processes. Their presentation included two demonstrations that focused on CDC patterns deployed with ODI and the automated multi-instance execution and monitoring. We are very grateful to Tim and Ryan for their very well prepared presentation at OpenWorld this year. Day 2 (Tuesday) will also be a busy day in our track.
    In addition to the Fusion Middleware Innovation Awards ceremony at 11:45am at Moscone West 3001, we have the following DI sessions: Real-World Operational Reporting Customer Panel, 11:45am, Moscone West 3005; Oracle Data Integrator Product Update and Future Strategy, 1:15pm, Moscone West 3005; High-volume OLTP with Oracle GoldenGate: Best Practices from Comcast, 1:15pm, Moscone West 3005; Everything You Need to Know about Monitoring Oracle GoldenGate, 5pm, Moscone West 3005. If you are at OpenWorld please join us in these sessions. For a full review of the data integration track at OpenWorld please see our Focus-On document.

    Read the article

  • ADO.NET Data Services Entity Framework request error when property setter is internal

    - by Jim Straatman
    I receive an error message when exposing an ADO.NET Data Service using an Entity Framework data model that contains an entity (called "Case") with an internal setter on a property. If I modify the setter to be public (using the entity designer), the data service works fine. I don’t need the entity "Case" exposed in the data service, so I tried to limit which entities are exposed using SetEntitySetAccessRule. This didn’t work, and the service endpoint fails with the same error. public static void InitializeService(IDataServiceConfiguration config) { config.SetEntitySetAccessRule("User", EntitySetRights.AllRead); } The error message is reported in a browser when the .svc endpoint is called. It is very generic, and reads “Request Error. The server encountered an error processing the request. See server logs for more details.” Unfortunately, there are no entries in the System and Application event logs. I found this stackoverflow question that shows how to configure tracing on the service. After doing so, the following NullReferenceException error was reported in the trace log. Does anyone know how to avoid this exception when including an entity with an internal setter? 131076 3 0 2 MOTOJIM http://msdn.microsoft.com/en-US/library/System.ServiceModel.Diagnostics.TraceHandledException.aspx Handling an exception. 685a2910-19-128703978432492675 System.NullReferenceException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 Object reference not set to an instance of an object. at System.Data.Services.Providers.ObjectContextServiceProvider.PopulateMemberMetadata(ResourceType resourceType, MetadataWorkspace workspace, IDictionary`2 entitySets, IDictionary`2 knownTypes) at System.Data.Services.Providers.ObjectContextServiceProvider.PopulateMetadata(IDictionary`2 knownTypes, IDictionary`2 entitySets) at System.Data.Services.Providers.BaseServiceProvider.PopulateMetadata() at System.Data.Services.DataService`1.CreateProvider(Type dataServiceType, Object dataSourceInstance, DataServiceConfiguration& configuration) at System.Data.Services.DataService`1.EnsureProviderAndConfigForRequest() at System.Data.Services.DataService`1.ProcessRequestForMessage(Stream messageBody) at SyncInvokeProcessRequestForMessage(Object , Object[] , Object[] ) at System.ServiceModel.Dispatcher.SyncMethodInvoker.Invoke(Object instance, Object[] inputs, Object[]& outputs) at System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeBegin(MessageRpc& rpc) at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage5(MessageRpc& rpc) at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage4(MessageRpc& rpc) at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage3(MessageRpc& rpc) at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage2(MessageRpc& rpc) at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage1(MessageRpc& rpc) at System.ServiceModel.Dispatcher.MessageRpc.Process(Boolean isOperationContextSet) </StackTrace> <ExceptionString>System.NullReferenceException: Object reference not set to an instance of an object.
    at System.Data.Services.Providers.ObjectContextServiceProvider.PopulateMemberMetadata(ResourceType resourceType, MetadataWorkspace workspace, IDictionary`2 entitySets, IDictionary`2 knownTypes) at System.Data.Services.Providers.ObjectContextServiceProvider.PopulateMetadata(IDictionary`2 knownTypes, IDictionary`2 entitySets) at System.Data.Services.Providers.BaseServiceProvider.P

    Read the article

  • How To Easily Back Up Your Gmail and Perform Scheduled Backups With GMVault

    - by Chris Hoffman
    We all know backups are important, but we rarely think about backing up our email. GMVault can automatically back up your Gmail to your computer and even restore the emails to another Gmail account – convenient when switching Gmail addresses. We’ve also covered using Thunderbird to back up your web-based email account, but GMVault has a few advantages, including its integrated restore function and easy integration with the Windows Task Scheduler.

    Read the article

  • Chargeback and showback...both a 'throw back'

    - by llaszews
    I've been getting asked again by customers and partners about chargeback and showback in the cloud, so I thought I would blog my response to this question. Chargeback background, information and industry analysis: Cloud computing is all about shared resources. These shared resources are computer servers (including memory and CPU), network devices, hard disk storage, database servers, application servers, cooling, floor space, electricity and more. These resources are shared by departments within a company, or by a number of companies, when resources are hosted in the public or hybrid cloud. Currently, hosting providers that run other companies on their cloud platforms do not have an accurate way to measure the shared computing resources used by a specific user, let alone by a specific customer. Additionally, companies running their own cloud data centers, for private or hybrid clouds, have no way of measuring and charging back the departments in the company that are using these shared cloud resources. In both cases, the inability to determine shared resource costs and charge them back to the company, department or user that is using those resources limits any clear measure of business benefit and impacts the company’s ability to measure the Return on Investment (ROI). An IT chargeback system is an accounting strategy that applies the costs of IT services, hardware or software to the business unit in which they are used. This system contrasts with traditional IT accounting models in which a centralized department bears all of the IT costs in an organization and those costs are treated simply as corporate overhead. Showback involves showing the IT costs to a department or customer but not actually charging them for their IT usage. Showback is a gradual method of introducing chargeback into an enterprise. Most companies implement a showback mechanism before a full chargeback system is put in place. Oracle chargeback product: Oracle Enterprise Manager provides tools for defining detailed chargeback plans spanning different metrics collected for each type of resource, as well as defining Cost Centers for grouping costs across multiple developers. Chargeback plans can use not only usage-based costs, but also configuration-based costs (e.g. version of the platform) or fixed costs (e.g. a flat-rate management fee). Chargeback has rich out-of-the-box reports. Trending reports show how charge and resource consumption vary over time, while Summary reports show the breakdown of charges or usage by different dimensions such as Cost Center or Target Type. These reports help consumers understand how their charges relate to their consumption and also assist the IT department with budgeting and planning activities. With BI Publisher, the reports can be made available in a variety of formats such as PDF, HTML, Word, Excel or PowerPoint.

    Read the article

  • Disable XF86Back and XF86Forward (back/forward on Lenovo Thinkpad)

    - by Nicolas Raoul
    My Thinkpad R500 has back/forward keys just above the direction keys, so in Firefox I keep accidentally losing entered text. How can I disable these back/forward keys? Excerpt from xev: KeyPress event, serial 29, synthetic NO, window 0x5e00001, root 0xbc, subw 0x0, time 2375874, (1279,794), root:(1285,881), state 0x10, keycode 166 (keysym 0x1008ff26, XF86Back), same_screen YES, XLookupString gives 0 bytes: XmbLookupString gives 0 bytes: XFilterEvent returns: False KeyPress event, serial 32, synthetic NO, window 0x5e00001, root 0xbc, subw 0x0, time 2377115, (1279,794), root:(1285,881), state 0x10, keycode 167 (keysym 0x1008ff27, XF86Forward), same_screen YES, XLookupString gives 0 bytes: XmbLookupString gives 0 bytes: XFilterEvent returns: False

    Read the article

  • Rotate an image and get back to its original position - OpenGL ES GLKit

    - by Manoj
    I need to rotate an image in OpenGL ES with GLKit and get it back to its original position. rotation += 5; _modelViewMatrix = GLKMatrix4Rotate( _modelViewMatrix, GLKMathDegreesToRadians(5), 1, 0, 0); _modelViewMatrix = GLKMatrix4Rotate( _modelViewMatrix, GLKMathDegreesToRadians(rotation), 1,0,0); I also need to move it a certain amount on the x axis and then get back to the original position it started from. How should I do it?
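
    One approach (a sketch with assumed names, not the poster's code): keep the untouched base matrix around and rebuild the model-view matrix every frame from an angle and offset that oscillate back to zero, rather than accumulating GLKMatrix4Rotate calls that are hard to undo exactly. _baseModelViewMatrix, _modelViewMatrix, _elapsed, and the amplitude values are assumptions for the example.

    ```objc
    #import <GLKit/GLKit.h>

    // Inside a GLKViewController subclass; the instance variables above are assumed.
    - (void)update {
        _elapsed += self.timeSinceLastUpdate;

        // sinf() goes 0 -> 1 -> 0 -> -1 -> 0, so both the rotation and the x offset
        // periodically return to their starting values.
        float angle   = GLKMathDegreesToRadians(45.0f) * sinf(_elapsed);
        float xOffset = 0.5f * sinf(_elapsed);

        GLKMatrix4 m = _baseModelViewMatrix;                // original transform
        m = GLKMatrix4Translate(m, xOffset, 0.0f, 0.0f);    // move out and back on x
        m = GLKMatrix4Rotate(m, angle, 1.0f, 0.0f, 0.0f);   // rotate out and back
        _modelViewMatrix = m;
    }
    ```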

    Read the article

  • How Many Back Links Do You Need?

    When you first embark on a link-building campaign, it seems pretty daunting to think about building hundreds or even thousands of links back to your site. If you are a beginner to link building, you may be wondering how many back links are enough?

    Read the article

  • How to Get Quality Back Links

    While doing search engine optimization of a website, one of the most important things is to get quality back-links for it. There are several ways of getting quality back-links for your website.

    Read the article

  • Getting Niche Specific Natural Back Links

    The only way your niche sites can become popular is by getting the maximum number of natural back links. Search engines like it when your page is linked as a reference on some topic. These links have excellent anchor text in them. The websites which link back to your pages are also in a related category, which is an added bonus.

    Read the article

  • Data Recovery needed with a Sector Zero problem on a HDD

    - by Jay Robins
    I left my computer (running XP) copying files to an external drive. I came back a few hours later, and the laptop was pretty hot and had frozen (ironic, ain't it). I forced a reboot, and the laptop HDD hasn't worked since. I set it up in an external enclosure, but XP can't mount the HDD to a drive letter, though it is able to recognize the manufacturer and drive size (Hitachi, 320GB). There is no noise or rattling when the drive is spinning, but I can't get anything off of it, since I can't mount it or see much of anything. A computer repair shop ran some software tests, says it came back with a "Zero Sector bad" message, and that I need to send it to a professional data recovery service. Any other options or ideas before I have to spend thousands of dollars to recover my data? Any help would be GREATLY appreciated!!! I'm desperate and a poor student! Thanks in advance, -Jay

    Read the article

  • Recover data from SD card

    - by Paul Tarjan
    I have a 2GB Kingston microSD card which is about 3 years old. I put it in a reader today on my Windows Vista computer, wrote a 32MB file onto it, safely removed it, and then tried to read it elsewhere. Nothing. Putting it back into Vista, it now says: You need to format the disk in drive F: before you can use it. What should I do? I have access to many computers and OSes if your recommendations need that. I would be very sad if I lost all the contents of the card. Most of the data is backed up, but there are a few things that aren't. :( Doing a # dd if=/dev/sdg of=~/tmp/sd.bin gives me a 2 gig file, and grepping the file it seems like lots of my data is still there. How can I put it back together?

    Read the article

  • Back Up to Tape the Way You Shop For Groceries

    - by rickramsey
    Imagine if this was how you shopped for groceries: From the end of the aisle sprint to the point where you reach the ketchup. Pull a bottle from the shelf and yell at the top of your lungs, “Got it!” Sprint back to the end of the aisle. Start again and sprint down the same aisle to the mustard, pull a bottle from the shelf and again yell for the whole store to hear, “Got it!” Sprint back to the end of the aisle. Repeat this procedure for every item you need in the aisle. Proceed to the next aisle and follow the same steps for the list of items you need from that aisle. Sounds ridiculous, doesn’t it? Not only is it horribly inefficient, it’s exhausting and can lead to wear out failures on your grocery cart, or worse, yourself. This is essentially how NetApp and some other applications write NDMP backups to tape. In the analogy, the ketchup and mustard are the files to be written, yelling “Got it!” is the equivalent of a sync mark at the end of a file, and the sprint back to the end of an aisle is the process most commonly called a “backhitch” where the drive has to back up on a tape to start writing again. Writing to tape in this way results in very slow tape drive performance and imposes unnecessary wear on the tape drive and the media, especially when writing small files. The good news is not all tape drives behave this way when writing small files. Unlike midrange LTO drives, Oracle’s StorageTek T10000D tape drive is designed to handle this scenario efficiently. The difference between the two drive types is that the T10000D drive gives you the ability to write files in a NetApp NDMP backup environment the way you would normally shop for groceries. With grocery shopping, you essentially stream through aisles picking up items as you go, and then after checking out, yell, “Got it!”, though you might do that last step silently. With the T10000D, it has a feature called the Tape Application Accelerator, which prevents the drive from having to stop after each file is written to notify NetApp or another application that the write was successful. When enabled in the T10000D tape drive, Tape Application Accelerator causes the tape drive to respond to tape mark and file sync commands differently than when disabled: A tape mark received by the tape drive is treated as a buffered tape mark. A file sync received by the tape drive is treated as a no op command. Since buffered tape marks and no op commands do not cause the tape drive to empty the contents of its buffer to tape and backhitch, the data is written to tape in significantly less time. Oracle has emulated NetApp environments with a number of different file sizes and found the following when comparing the T10000D with the Tape Application Accelerator enabled versus LTO6 tape drives. Notice how the T10000D is not only monumentally faster, but also remarkably consistent? In addition, the writing of the 50 GB of files is done without a single backhitch. The LTO6 drive, meanwhile, will perform as many as 3,800 backhitches! At the end of writing the entire set of files, the T10000D tape drive reports back to the application, in this case NetApp, that the write was successful via a tape mark. So if the Tape Application Accelerator dramatically improves performance and reliability, why wouldn’t you always have it enabled? The reason is because tape drive buffers are meant to be just temporary data repositories so in the event of a power loss, there could be data loss in certain environments for the files that resided in the buffer. 
    Fortunately, we do have best practices, depending on your environment, to prevent this from happening. I highly recommend reading Maximizing Tape Performance with StorageTek T10000 Tape Drives (pdf) to decide which best practice is right for you. The white paper also digs deeper into the benefits of the Tape Application Accelerator. The white paper is free, and after downloading it you can decide for yourself whether you want to yell “Got it!” out loud or just silently to yourself. Customer Advisory Panel One final link: Oracle has started up a Customer Advisory Panel program to collect feedback from customers on their current experiences with Oracle products, as well as desires for future product development. If you would like to participate in the program, go to this link at oracle.com. photo taken on Idaho's Sacajewea Historic Biway by Rick Ramsey - Brian Zents

    Read the article

  • Import exponential fixed-width format data into Excel

    - by Tom Daniel
    I've received a bunch of text data files consisting of lots of records (30K/file) of 3 fields, each a 5-place number in exponential format: s0.nnnnnEsee (where s is +/-, n is a digit and ee is the exponent, always 2 digits). When I open a file in Notepad, the format is perfectly uniform throughout each file, but when I import it to Excel using Data|Import|Fixed Width, many of the data values get messed up, no matter what format (text, exponential, various custom tries) I assign to the cells. Looking at the Notepad version, it appears that leading + signs were replaced with a space in the data file, but the sign of the exponent is always there. This means that some fields begin with a space, and this appears to confuse the Excel import routine. I get the same result in Excel 2003 and 2007. I'm sure there's a straightforward solution (hopefully without a messy VBA routine), but I can't figure out what to try next. :-) To clarify (hopefully), here are some input records and the corresponding text input to Excel: Notepad Excel -0.11311E+01 0.10431E-04 0.27018E-03 -0.11311E 1.0431E-05 2.7018E-04 0.19608E+00-0.81414E-02-0.89553E-02 0.19608E -8.1414E-03 8.9553E-03 etc. Whoopee! Solved my own problem - in the spirit of Jeopardy, now that I've begun the question, here's the answer: use a different "File Origin" - several other than the default "Unicode UTF..." work fine! What a pain. Hope this helps somebody else avoid a few unpleasant hours! Aloha from Kona, Tom

    Read the article

  • Recover data from quick formatted DVD-R

    - by Andrii Kalytiiuk
    I need to recover data from a quick-formatted DVD-R. Please advise a free-of-charge option (cheap commercial tools will be OK too). The disc was partially recorded with the Windows built-in disc recorder and the recording most likely was not complete. Afterwards I inserted the partially recorded DVD again and, in the Windows recorder's 'How to use this disc?' message box, selected 'use for CD/DVD player', and the data was completely lost, as a new recording session was started. Photo files were recorded on the disc. What I have tried so far: DiskInternals CD-DVD Recovery - sees 5 jpg files but can't show a preview; the tool is commercial and the trial version does not allow recovering files. CDCheck - doesn't see any files and reports errors when attempting to scan the DVD. CD Recovery Toolbox Free - does not even recognize the DVD drive. ISO Buster - recognizes two files: one MP3 file for 99% of the recorded size and one ARC file of about 100 KB. MiniTool Power Data Recovery - Free Edition - does not see any files on the DVD. Stellar Phoenix CD DVD Data Recovery - does not see any files. BinaryBiz Virtual Lab - sees the DVD disc but needs a license to browse content. Please advise how it is possible to recover files from the DVD.

    Read the article

  • Ajax back button jquery - Need full example step by step.

    - by Latios88
    Hi. I need help supporting the back button with the jQuery ajax .load() method. I have tried a couple of plugins but I can't get any of them to work. I am loading my content like this: $('.pagination a').click(function(){ var url = $(this).attr('href'); ajaxLoad(url,null,'.container'); return false; }); Note: the ajaxLoad function executes the jQuery load method. Many thanks for your help.

    Read the article
