Search Results

Search found 97400 results on 3896 pages for 'application data'.

  • Which MS technologies would be suited for a data intensive application?

    - by steve.tse
    I'm a junior VB.NET developer with little application design knowledge. I've been reading a lot of material online regarding different design patterns, frameworks, and methodologies, and it's become a bit confusing for me. Right now I'm trying to decide which language would be best suited to convert an existing VB6 application (with a SQL Server backend). I need to update the UI and add more user functionality and reporting capabilities. Initially I was thinking of using WPF and attempting the MVVM model for this big project, with reports generated from SSRS. A peer suggested using ASP.NET, and I don't have enough experience to determine which would be better. The senior programmers here are stuck on using VB6 and don't have any input on what to use, but they are encouraging me to use the latest technologies. This application would be for ~20 users in a central location. Ideally I would stick to a Microsoft .NET language. The current interface is similar to a datagrid table where the user clicks in to see the detail of each record. Users would need to have multiple records open at any given time. I look forward to all the advice I can get.

    EDIT 2010/04/22 2:47 PM EST

    - What is your audience? Internal clients within an intranet.
    - How complex are the interactions you expect to implement? Not very: displaying data from SQL Server in the UI and allowing user updates to said data, typically just one user modifying a record.
    - Do you require near real-time data updates? No.
    - How often do you expect to update the application after the first release? Twice a year.
    - Do you expect a well-defined set of client platforms? Yes, a Windows XP environment, potentially upgrading to Win7. Currently on IE6, moving to IE7 or 8 within a couple of months.
    - Do users need access from anywhere? No, just from their PCs.

    Read the article

  • Where to keep config data other than config file (Windows App)?

    - by user144842
    My Windows application GUI accepts some required application configuration fields from the user. I need to store them, of course, but I want to hide these fields from the user. I cannot use a database to store these configs, and I want to avoid using app.config as well (no app.config encryption). Any suggestions on where, and in which format, I should store the fields? (Examples of such fields: database user credentials, task schedule info, etc.)

    Read the article

  • JGoodies HashMap

    - by JohnMcClane
    Hi, I'm trying to build a chart program using the presentation model pattern. Using JGoodies for data binding was relatively easy for simple types like strings or numbers, but I can't figure out how to use it on a HashMap. I'll try to explain how the chart works and what my problem is: a chart consists of DataSeries, and a DataSeries consists of DataPoints. I want to have a data model and to be able to use different views on the same model (e.g. bar chart, pie chart, ...). Each of them consists of three classes. For example:

    - DataPointModel: holds the data model (value, label, category)
    - DataPointViewModel: extends the JGoodies PresentationModel; wraps around DataPointModel and holds view properties like font and color
    - DataPoint: abstract class, extends JComponent; different views must subclass it and implement their own UI

    Binding and creating the data model was easy, but I don't know how to bind my data series model.

        package at.onscreen.chart;

        import java.beans.PropertyChangeListener;
        import java.beans.PropertyChangeSupport;
        import java.util.Collection;
        import java.util.HashMap;
        import java.util.Iterator;

        public class DataSeriesModel {

            public static String PROPERTY_DATAPOINT = "dataPoint";
            public static String PROPERTY_DATAPOINTS = "dataPoints";
            public static String PROPERTY_LABEL = "label";
            public static String PROPERTY_MAXVALUE = "maxValue";

            /** holds the data points */
            private HashMap<String, DataPoint> dataPoints;

            /** the label for the data series */
            private String label;

            /** the maximum data point value */
            private Double maxValue;

            /** the model supports property change notification */
            private PropertyChangeSupport propertyChangeSupport;

            /** default constructor */
            public DataSeriesModel() {
                this.maxValue = Double.valueOf(0);
                this.dataPoints = new HashMap<String, DataPoint>();
                this.propertyChangeSupport = new PropertyChangeSupport(this);
            }

            /** constructor
             *  @param label - the series label */
            public DataSeriesModel(String label) {
                this.dataPoints = new HashMap<String, DataPoint>();
                this.maxValue = Double.valueOf(0);
                this.label = label;
                this.propertyChangeSupport = new PropertyChangeSupport(this);
            }

            /** full constructor
             *  @param label - the series label
             *  @param dataPoints - an array of data points */
            public DataSeriesModel(String label, DataPoint[] dataPoints) {
                this.dataPoints = new HashMap<String, DataPoint>();
                this.propertyChangeSupport = new PropertyChangeSupport(this);
                this.maxValue = Double.valueOf(0);
                this.label = label;
                for (int i = 0; i < dataPoints.length; i++) {
                    this.addDataPoint(dataPoints[i]);
                }
            }

            /** full constructor
             *  @param label - the series label
             *  @param dataPoints - a collection of data points */
            public DataSeriesModel(String label, Collection<DataPoint> dataPoints) {
                this.dataPoints = new HashMap<String, DataPoint>();
                this.propertyChangeSupport = new PropertyChangeSupport(this);
                this.maxValue = Double.valueOf(0);
                this.label = label;
                for (Iterator<DataPoint> it = dataPoints.iterator(); it.hasNext();) {
                    this.addDataPoint(it.next());
                }
            }

            /** adds a new data point to the series. if the series contains a data point
             *  with the same id, it will be replaced by the new one.
             *  @param dataPoint - the data point */
            public void addDataPoint(DataPoint dataPoint) {
                String category = dataPoint.getCategory();
                DataPoint oldDataPoint = this.getDataPoint(category);
                this.dataPoints.put(category, dataPoint);
                this.setMaxValue(Math.max(this.maxValue, dataPoint.getValue()));
                this.propertyChangeSupport.firePropertyChange(PROPERTY_DATAPOINT, oldDataPoint, dataPoint);
            }

            /** returns the data point with the given id, or null if there is no such point
             *  @param category - the id of the data point */
            public DataPoint getDataPoint(String category) {
                return this.dataPoints.get(category);
            }

            /** removes the data point with the given id from the series, if present
             *  @param category - the data point to remove */
            public void removeDataPoint(String category) {
                DataPoint dataPoint = this.getDataPoint(category);
                this.dataPoints.remove(category);
                if (dataPoint != null) {
                    if (dataPoint.getValue() == this.getMaxValue()) {
                        Double maxValue = Double.valueOf(0);
                        for (Iterator<DataPoint> it = this.iterator(); it.hasNext();) {
                            DataPoint itDataPoint = it.next();
                            maxValue = Math.max(itDataPoint.getValue(), maxValue);
                        }
                        this.setMaxValue(maxValue);
                    }
                }
                this.propertyChangeSupport.firePropertyChange(PROPERTY_DATAPOINT, dataPoint, null);
            }

            /** removes all data points from the series */
            public void removeAll() {
                this.setMaxValue(Double.valueOf(0));
                this.dataPoints.clear();
                this.propertyChangeSupport.firePropertyChange(PROPERTY_DATAPOINTS, this.getDataPoints(), null);
            }

            /** returns the maximum of all data point values */
            public Double getMaxValue() {
                return this.maxValue;
            }

            /** sets the max value */
            protected void setMaxValue(Double maxValue) {
                Double oldMaxValue = this.getMaxValue();
                this.maxValue = maxValue;
                this.propertyChangeSupport.firePropertyChange(PROPERTY_MAXVALUE, oldMaxValue, maxValue);
            }

            /** returns true if there is a data point with the given category */
            public boolean contains(String category) {
                return this.dataPoints.containsKey(category);
            }

            /** returns the label for the series */
            public String getLabel() {
                return this.label;
            }

            /** returns an iterator over the data points */
            public Iterator<DataPoint> iterator() {
                return this.dataPoints.values().iterator();
            }

            /** returns a collection of the data points. the collection supports removal,
             *  but does not support adding of data points. */
            public Collection<DataPoint> getDataPoints() {
                return this.dataPoints.values();
            }

            /** returns the number of data points in the series */
            public int getSize() {
                return this.dataPoints.size();
            }

            /** adds a PropertyChangeListener */
            public void addPropertyChangeListener(PropertyChangeListener listener) {
                this.propertyChangeSupport.addPropertyChangeListener(listener);
            }

            /** removes a PropertyChangeListener */
            public void removePropertyChangeListener(PropertyChangeListener listener) {
                this.propertyChangeSupport.removePropertyChangeListener(listener);
            }
        }

        package at.onscreen.chart;

        import java.util.Collection;
        import java.util.Iterator;

        import com.jgoodies.binding.PresentationModel;

        public class DataSeriesViewModel extends PresentationModel<DataSeriesModel> {

            /** default constructor */
            public DataSeriesViewModel() {
                super(new DataSeriesModel());
            }

            /** @param label - the series label */
            public DataSeriesViewModel(String label) {
                super(new DataSeriesModel(label));
            }

            /** @param label - the series label
             *  @param dataPoints - an array of data points */
            public DataSeriesViewModel(String label, DataPoint[] dataPoints) {
                super(new DataSeriesModel(label, dataPoints));
            }

            /** @param label - the series label
             *  @param dataPoints - a collection of data points */
            public DataSeriesViewModel(String label, Collection<DataPoint> dataPoints) {
                super(new DataSeriesModel(label, dataPoints));
            }

            /** @param model - the data series model */
            public DataSeriesViewModel(DataSeriesModel model) {
                super(model);
            }

            // the methods below simply delegate to the wrapped DataSeriesModel bean

            public void addDataPoint(DataPoint dataPoint) { this.getBean().addDataPoint(dataPoint); }
            public boolean contains(String category) { return this.getBean().contains(category); }
            public DataPoint getDataPoint(String category) { return this.getBean().getDataPoint(category); }
            public Collection<DataPoint> getDataPoints() { return this.getBean().getDataPoints(); }
            public String getLabel() { return this.getBean().getLabel(); }
            public Double getMaxValue() { return this.getBean().getMaxValue(); }
            public int getSize() { return this.getBean().getSize(); }
            public Iterator<DataPoint> iterator() { return this.getBean().iterator(); }
            public void removeAll() { this.getBean().removeAll(); }
            public void removeDataPoint(String category) { this.getBean().removeDataPoint(category); }
        }

        package at.onscreen.chart;

        import java.beans.PropertyChangeEvent;
        import java.beans.PropertyChangeListener;
        import java.util.Collection;
        import java.util.Iterator;

        import javax.swing.JComponent;

        public abstract class DataSeries extends JComponent implements PropertyChangeListener {

            /** the model */
            private DataSeriesViewModel model;

            /** default constructor */
            public DataSeries() {
                this.model = new DataSeriesViewModel();
                this.model.addPropertyChangeListener(this);
                this.createComponents();
            }

            /** @param label - the series label */
            public DataSeries(String label) {
                this.model = new DataSeriesViewModel(label);
                this.model.addPropertyChangeListener(this);
                this.createComponents();
            }

            /** @param label - the series label
             *  @param dataPoints - an array of data points */
            public DataSeries(String label, DataPoint[] dataPoints) {
                this.model = new DataSeriesViewModel(label, dataPoints);
                this.model.addPropertyChangeListener(this);
                this.createComponents();
            }

            /** @param label - the series label
             *  @param dataPoints - a collection of data points */
            public DataSeries(String label, Collection<DataPoint> dataPoints) {
                this.model = new DataSeriesViewModel(label, dataPoints);
                this.model.addPropertyChangeListener(this);
                this.createComponents();
            }

            /** @param model - the model */
            public DataSeries(DataSeriesViewModel model) {
                this.model = model;
                this.model.addPropertyChangeListener(this);
                this.createComponents();
            }

            /** creates, binds and configures UI components. data point properties can be
             *  created here as components or be painted in paintComponent. */
            protected abstract void createComponents();

            @Override
            public void propertyChange(PropertyChangeEvent evt) {
                this.repaint();
            }

            // the methods below delegate to the view model

            public void addDataPoint(DataPoint dataPoint) { this.model.addDataPoint(dataPoint); }
            public boolean contains(String category) { return this.model.contains(category); }
            public DataPoint getDataPoint(String category) { return this.model.getDataPoint(category); }
            public Collection<DataPoint> getDataPoints() { return this.model.getDataPoints(); }
            public String getLabel() { return this.model.getLabel(); }
            public Double getMaxValue() { return this.model.getMaxValue(); }
            public int getDataPointCount() { return this.model.getSize(); }
            public Iterator<DataPoint> iterator() { return this.model.iterator(); }
            public void removeAll() { this.model.removeAll(); }
            public void removeDataPoint(String category) { this.model.removeDataPoint(category); }

            /** returns the data series view model */
            public DataSeriesViewModel getViewModel() { return this.model; }

            /** returns the data series model */
            public DataSeriesModel getModel() { return this.model.getBean(); }
        }

        package at.onscreen.chart.builder;

        import java.util.Collection;

        import net.miginfocom.swing.MigLayout;

        import at.onscreen.chart.DataPoint;
        import at.onscreen.chart.DataSeries;
        import at.onscreen.chart.DataSeriesViewModel;

        public class BuilderDataSeries extends DataSeries {

            public BuilderDataSeries() { super(); }
            public BuilderDataSeries(String label) { super(label); }
            public BuilderDataSeries(String label, DataPoint[] dataPoints) { super(label, dataPoints); }
            public BuilderDataSeries(String label, Collection<DataPoint> dataPoints) { super(label, dataPoints); }
            public BuilderDataSeries(DataSeriesViewModel model) { super(model); }

            @Override
            protected void createComponents() {
                this.setLayout(new MigLayout());
                /*
                 * I want to add a new BuilderDataPoint for each data point in the model.
                 * I want the BuilderDataPoints to be synchronized with the model, e.g.
                 * when a data point is removed from the model, the BuilderDataPoint
                 * shall be removed from the BuilderDataSeries.
                 */
            }
        }

        package at.onscreen.chart.builder;

        import javax.swing.JFormattedTextField;
        import javax.swing.JTextField;

        import at.onscreen.chart.DataPoint;
        import at.onscreen.chart.DataPointModel;
        import at.onscreen.chart.DataPointViewModel;
        import at.onscreen.chart.ValueFormat;

        import com.jgoodies.binding.adapter.BasicComponentFactory;
        import com.jgoodies.binding.beans.BeanAdapter;

        public class BuilderDataPoint extends DataPoint {

            public BuilderDataPoint() { super(); }
            public BuilderDataPoint(String category) { super(category); }
            public BuilderDataPoint(Double value, String label, String category) { super(value, label, category); }
            public BuilderDataPoint(DataPointViewModel model) { super(model); }

            @Override
            protected void createComponents() {
                BeanAdapter<DataPointModel> beanAdapter = new BeanAdapter<DataPointModel>(this.getModel(), true);
                ValueFormat format = new ValueFormat();

                JFormattedTextField value = BasicComponentFactory.createFormattedTextField(
                        beanAdapter.getValueModel(DataPointModel.PROPERTY_VALUE), format);
                this.add(value, "w 80, growx, wrap");

                JTextField label = BasicComponentFactory.createTextField(
                        beanAdapter.getValueModel(DataPointModel.PROPERTY_LABEL));
                this.add(label, "growx, wrap");

                JTextField category = BasicComponentFactory.createTextField(
                        beanAdapter.getValueModel(DataPointModel.PROPERTY_CATEGORY));
                this.add(category, "growx, wrap");
            }
        }

    To sum it up: I need to know how to bind a HashMap property to the JComponent components property. JGoodies is, in my opinion, not very well documented; I spent a long time searching the internet, but did not find any solution to my problem. I hope you can help me.
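
    As far as I know, JGoodies Binding has no ready-made adapter for Map-valued bean properties (its adapters target single values and list-like models), so one pragmatic option is to not bind the HashMap at all and instead let BuilderDataSeries keep its child components in sync by listening to the bean's own PROPERTY_DATAPOINT events. The following is only a minimal sketch against the classes above, not tested code; the "wrap" layout constraint and the handling of the event values are assumptions:

        @Override
        protected void createComponents() {
            this.setLayout(new MigLayout());
            // the bean fires PROPERTY_DATAPOINT with (old, new) on add/replace
            // and with (old, null) on remove, so both cases can be handled here
            this.getModel().addPropertyChangeListener(new PropertyChangeListener() {
                @Override
                public void propertyChange(PropertyChangeEvent evt) {
                    if (DataSeriesModel.PROPERTY_DATAPOINT.equals(evt.getPropertyName())) {
                        DataPoint oldPoint = (DataPoint) evt.getOldValue();
                        DataPoint newPoint = (DataPoint) evt.getNewValue();
                        if (oldPoint != null) {
                            BuilderDataSeries.this.remove(oldPoint);      // removed or replaced
                        }
                        if (newPoint != null) {
                            BuilderDataSeries.this.add(newPoint, "wrap"); // newly added
                        }
                        BuilderDataSeries.this.revalidate();
                        BuilderDataSeries.this.repaint();
                    }
                }
            });
        }

    This works because the model stores the DataPoint components themselves, so they can be added to the container directly. A complete version would also handle the PROPERTY_DATAPOINTS event that removeAll() fires.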

    Read the article

  • ReportBuilder.application fails on my PC - but works on localhost

    - by JayTee
    We're running SQL 2005 on a Win2K3 server and are using SSRS. Here's the situation:

    - I can run Report Builder from localhost
    - My coworker can run Report Builder on his Vista computer
    - Another coworker can run Report Builder on his XP SP3 computer (IE7)
    - I can NOT run Report Builder on my XP SP3 computer (IE7)

    I'm told that it could be anything from an errant registry entry to a group policy problem. Here is what I've tried:

    - Put the site into "Trusted Sites" with "low" security
    - Re-install .NET
    - Create a new local user account and attempt to run it

    The results? Every single time, I get a dialog box: "Application cannot be started. Contact the application vendor". I click the details button and get this:

        PLATFORM VERSION INFO
        Windows : 5.1.2600.196608 (Win32NT)
        Common Language Runtime : 2.0.50727.3607
        System.Deployment.dll : 2.0.50727.3053 (netfxsp.050727-3000)
        mscorwks.dll : 2.0.50727.3607 (GDR.050727-3600)
        dfdll.dll : 2.0.50727.3053 (netfxsp.050727-3000)
        dfshim.dll : 2.0.50727.3053 (netfxsp.050727-3000)

        SOURCES
        Deployment url : http://www.example.com/ReportServer/ReportBuilder/ReportBuilder.application
        Server : Microsoft-IIS/6.0
        X-Powered-By : ASP.NET
        X-AspNet-Version : 2.0.50727

        IDENTITIES
        Deployment Identity : ReportBuilder.application, Version=9.0.3042.0, Culture=neutral, PublicKeyToken=c3bce3770c238a49, processorArchitecture=msil

        APPLICATION SUMMARY
        * Online only application.
        * Trust url parameter is set.

        ERROR SUMMARY
        Below is a summary of the errors, details of these errors are listed later in the log.
        * Activation of http://www.example.com/ReportServer/ReportBuilder/ReportBuilder.application resulted in exception. Following failure messages were detected:
          + Value does not fall within the expected range.

        COMPONENT STORE TRANSACTION FAILURE SUMMARY
        No transaction error was detected.

        WARNINGS
        There were no warnings during this operation.

        OPERATION PROGRESS STATUS
        * [4/7/2010 2:53:57 PM] : Activation of http://www.example.com/ReportServer/ReportBuilder/ReportBuilder.application has started.
        * [4/7/2010 2:53:58 PM] : Processing of deployment manifest has successfully completed.

        ERROR DETAILS
        Following errors were detected during this operation.
        * [4/7/2010 2:53:58 PM] System.ArgumentException - Value does not fall within the expected range.
          - Source: System.Deployment
          - Stack trace:
            at System.Deployment.Application.NativeMethods.CorLaunchApplication(UInt32 hostType, String applicationFullName, Int32 manifestPathsCount, String[] manifestPaths, Int32 activationDataCount, String[] activationData, PROCESS_INFORMATION processInformation)
            at System.Deployment.Application.ComponentStore.ActivateApplication(DefinitionAppId appId, String activationParameter, Boolean useActivationParameter)
            at System.Deployment.Application.SubscriptionStore.ActivateApplication(DefinitionAppId appId, String activationParameter, Boolean useActivationParameter)
            at System.Deployment.Application.ApplicationActivator.Activate(DefinitionAppId appId, AssemblyManifest appManifest, String activationParameter, Boolean useActivationParameter)
            at System.Deployment.Application.ApplicationActivator.PerformDeploymentActivation(Uri activationUri, Boolean isShortcut, String textualSubId, String deploymentProviderUrlFromExtension, BrowserSettings browserSettings, String& errorPageUrl)
            at System.Deployment.Application.ApplicationActivator.ActivateDeploymentWorker(Object state)

        COMPONENT STORE TRANSACTION DETAILS
        * Transaction at [4/7/2010 2:53:58 PM]
          + System.Deployment.Internal.Isolation.StoreOperationSetDeploymentMetadata
            - Status: Set
            - HRESULT: 0x0
          + System.Deployment.Internal.Isolation.StoreTransactionOperationType (27)
            - HRESULT: 0x0

    I'm really at a loss. I'm certain there is something on my PC preventing the application from running, but I just don't know what. Google hasn't been much of a help, because most problems are related to server configuration (which I know is correct, since it works on other PCs). Help me, Overflow Kenobi, you're my only hope...

    Read the article

  • How to make data downloaded from Google App Engine readable?

    - by zjm1126
    I used the approach in this article to download all the data from my Google App Engine app: http://code.google.com/intl/en/appengine/docs/python/tools/uploadingdata.html#Creating_Exporter_Classes

    I downloaded the data with:

        bulkloader.py --dump --url=http://zjm1126.appspot.com/remote_api --filename=b.csv

    but the data looks like this: (screenshot not preserved). So how do I make the data readable? Thanks.

    Read the article

  • Specifying column names from a list in the data.frame command.

    - by MW Frost
    I have a list called cols with column names in it:

        cols <- c('Column1','Column2','Column3')

    I'd like to reproduce this command, but with a call to the list:

        data.frame(Column1=rnorm(10))

    Here's what happens when I try it:

        > data.frame(cols[1]=rnorm(10))
        Error: unexpected '=' in "data.frame(I(cols[1])="

    The same thing happens if I wrap cols[1] in I() or eval(). How can I feed that item from the vector into the data.frame() command?

    Read the article

  • How do I make form data not disappear after hitting refresh?

    - by acidzombie24
    I went to test my page in another browser. In Google Chrome I can fill out a form, hit back and forward, and still have the data there. Now, I need the page to refresh so certain data is correct (such as the session id, in case the cookie expires or the user logs out before submitting). When I refresh, I lose all the form data. Is there some option I can set so all the data is kept?

    Read the article

  • SQL Server – SafePeak “Logon Trigger” Feature for Managing Data Access

    - by pinaldave
    Lately I received an interesting question about the abilities of SafePeak for SQL Server acceleration software:

    Q: "I would like to use SafePeak to make my CRM application faster. It is an application we bought from some vendor; after a while it became slow and we can't reprogram it. SafePeak automated caching sounds like an easy and good solution for us. But in my application there are many servers and different other application services that address its main database, and some even change data, and I feel there is a chance that during the connection process we may miss some servers. Is there a way to ensure that SafePeak will be aware of all connections to the SQL Server, so its cache will remain intact?"

    Interesting question, as I remember that SafePeak (http://www.safepeak.com/Product/SafePeak-Overview) likes all traffic to the database to go through it. I decided to check out the features of SafePeak's latest version (2.1) and seek an answer there.

    A: Indeed, I found SafePeak has a feature they call "Logon Trigger" that is designed for that purpose. It is located in the user interface under: Settings -> SQL instances management -> [your instance] -> [Logon Trigger] tab. From here you activate / deactivate it and control a white-list of enabled server IPs and login names that SafePeak will ignore.

    After activation of the "logon trigger", the SafePeak server is notified by the SQL Server itself on each newly opened connection. SafePeak monitors those connections and decides whether there is something to do with them or not. On a typical installation SafePeak likes all application and user connections to go via SafePeak; this way it knows about data and schema updates immediately (in real time). With activation of the SafePeak "logon trigger", a special CLR trigger is deployed on the SQL Server that notifies SafePeak of any connection that has not arrived via SafePeak. In such cases SafePeak can act to clear and lock the cache, or to ignore it. This feature ensures SafePeak will be aware of all connections, so the SafePeak cache will stay exactly correct at all times. So even if a user, like a DBA, connects to the SQL Server not via SafePeak, SafePeak will know about it and take action. The notification does not impact the work of that connection; the user or application can still continue to do whatever they planned to do.

    Note: I found that activation of the logon trigger in SafePeak requires that the SafePeak SQL login have the following permissions: 1) CONTROL SERVER; 2) VIEW SERVER STATE; 3) and that the SQL Server instance is CLR enabled.

    Seeing SafePeak in action, I can say SafePeak is a fantastic resource for those who seek better performance for SQL Server critical apps. SafePeak promises to accelerate SQL Server applications in just several hours of installation, automatic learning and some optimization configuration (no code changes!). If better application and database performance means better business to you, I suggest you download and try SafePeak. The SafePeak solution is indeed unique, and the questions I receive are very interesting. Have any more questions on SafePeak? Please leave your question as a comment and I will try to get an answer for you.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Career in mobile software/application development [closed]

    - by pramod
    I'm planning to do a course on Wireless & Mobile Computing. The syllabus is given below; please check it and let me know whether it's worth doing, and what the job prospects are after that. I'm a fresher from an electronics engineering background. The modules are:

    Wireless and Mobile Computing (WiMC) – Modules

    - C, C++ Programming and Data Structures (100 Hours): C revision; C/C++ programming tools on Linux (vi editor, gdb, etc.); OOP concepts; programming constructs; functions; access specifiers; classes and objects; overloading; inheritance; polymorphism; templates; data structures in C++: arrays, stacks, queues, linked lists (singly, doubly, circular), trees, threaded trees, AVL trees, graphs, sorting (bubble, quick, heap, merge)
    - System Development Methodology (18 Hours): software life cycle and various life cycle models; project management; software as a process; various phases in s/w development; risk analysis and management; software quality assurance; introduction to coding standards; software project management; testing strategies and tactics; project management and introduction to risk management
    - Java Programming (110 Hours): data types, operators and language constructs; classes and objects, inner classes and inheritance; interfaces and packages; exceptions; threads; java.lang; java.util; java.awt; java.io; java.applet; java.swing; XML, XSL, DTD; Java network programming; introduction to servlets
    - Mobile and Wireless Technologies (30 Hours): basics of wireless technologies; cellular communication: single cell systems, multi-cell systems, frequency reuse, analog cellular systems, digital cellular systems; GSM standard: mobile station, BTS, BSC, MSC, SMS server, call processing and protocols; CDMA standard: spread spectrum technologies; 2.5G and 3G systems: HSCSD, GPRS, W-CDMA/UMTS, 3GPP and international roaming, multimedia services; CDMA based cellular mobile communication systems; wireless personal area networks: Bluetooth, IEEE 802.11a/b/g standards; mobile handset device interfacing: data cables, IrDA, Bluetooth, touch-screen interfacing; wireless security; telemetry
    - Java Wireless Programming and Applications Development, J2ME (100 Hours): J2ME architecture; the CLDC and the KVM; tools and development process; classification of CLDC target devices; CLDC collections API; CLDC streams model; MIDlets; MIDlet lifecycle; MIDP programming; MIDP event architecture; high-level event handling; low-level event handling; the CLDC networking package; the MIDP implementation; introduction to WAP, WML Script and XHTML; introduction to Multimedia Messaging Services (MMS)
    - Symbian Programming (60 Hours): Symbian OS basics; Symbian OS services; Symbian OS organization; GUI approaches; ROM building; debugging; hardware abstraction; base porting; Symbian OS reference design porting; file systems; overview of Symbian OS development (DevKits, CustKits and SDKs); CodeWarrior tool; application & UI development; client server framework; ECOM; STDLIB in Symbian
    - iPhone Programming (80 Hours): introducing iPhone core specifications; understanding iPhone input and output; designing web pages for the iPhone; capturing iPhone events; introducing the WebKit; CSS transforms, transitions and animations; using iUI for web apps; using Canvas for web apps; building web apps with Dashcode; writing Dashcode programs; debugging iPhone web pages; SDK programming for web developers; an introduction to object-oriented programming; introducing the iPhone OS; using Xcode and Interface Builder; programming with the SDK toolkit
    - OS Concepts & Linux Programming (60 Hours): operating system concepts: what is an OS?, processes, scheduling & synchronization, memory management, virtual memory and paging; Linux architecture; programming in Linux; Linux shell programming; writing device drivers; configuring and building a GNU cross-toolchain; configuring and compiling Linux; virtual file system; porting Linux to target hardware
    - WinCE.NET and Database Technology (80 Hours): execution process in the .NET environment; language interoperability; assemblies; need of C#; operators; namespaces & assemblies; arrays; preprocessors; delegates and events; boxing and unboxing; regular expressions; collections; multithreaded programming; memory management; exception handling; WinForms; working with databases; ASP.NET server controls and client-side scripts; ASP.NET web server controls; validation controls; principles of database management; need of RDBMS; client/server computing; RDBMS technologies; Codd's rules; data models; normalization techniques; ER diagrams; data flow diagrams; database recovery & backup; SQL
    - Android Application (80 Hours): introduction to Android; why develop for Android; Android SDK features; creating Android activities; fundamental Android UI design; intents, adapters, dialogs; Android techniques for saving data; databases in Android; maps, geocoding, location-based services; toasts, using alarms, instant messaging; using Bluetooth; using telephony; introducing the sensor manager; managing network and Wi-Fi connections; advanced Android development; Linux kernel security; implementing an AIDL interface
    - Project (120 Hours)

    Read the article

  • FairWarning Privacy Monitoring Solutions Rely on MySQL to Secure Patient Data

    - by Rebecca Hansen
    FairWarning® solutions have audited well over 120 billion events, each of which was processed and stored in a MySQL database. FairWarning is the world's leading supplier of privacy monitoring solutions for electronic health records, relied on by over 1,200 hospitals and 5,000 clinics to keep their patients' data safe. In January 2014, FairWarning was awarded the highest commendation in healthcare IT as the first ever Category Leader for Patient Privacy Monitoring in the "2013 Best in KLAS: Software & Services" report[1].

    FairWarning has used MySQL as their solutions' database from their start in 2005 through worldwide expansion and market leadership. FairWarning recently migrated their solutions from MyISAM to InnoDB and updated from MySQL 5.5 to 5.6. Following are some of the benefits they've had as a result of those changes, and reasons for their continued reliance on MySQL (from the FairWarning MySQL Case Study).

    Scalability to Handle Terabytes of Data. FairWarning's customers have a lot of data: on average, FairWarning customers receive over 700,000 events to be processed daily. Over 25% of their customers receive over 30 million events per day, which equates to over 1 billion events and nearly one terabyte (TB) of new data each month. Databases range in size from a few hundred GBs to 10+ TBs for enterprise deployments (data are rolled off after 13 months).

    Low or Zero Admin = Few DBAs. "MySQL has not required a lot of administration. After it's been tuned, configured, and optimized for size on initial setup, we have very low administrative costs. I can scale and add more customers without adding DBAs. This has had a big, positive impact on our business." - Chris Arnold, FairWarning Vice President of Product Management and Engineering.

    Performance Schema. As the size of FairWarning's customers has increased, so have their tables and data volumes. MySQL 5.6's new maintenance and management features have helped FairWarning keep up. In particular, the MySQL 5.6 performance schema's low-level metrics have provided critical insight into how the system is performing and why.

    Support for Multi-CPU Threads. MySQL 5.6's support for multiple concurrent CPU threads, together with FairWarning's custom data loader, allows multiple files to load into a single table simultaneously vs. one at a time. As a result, their data load time has been reduced by 500%.

    MySQL Enterprise Hot Backup. Because hospitals and clinics never stop, FairWarning solutions can't either. FairWarning changed from using mysqldump to MySQL Enterprise Hot Backup, which has reduced downtime, restore time, and storage requirements. For many of their larger customers, restore time has decreased by 80%.

    MySQL Enterprise Edition and Product Roadmap Provide Complete Solution. "MySQL's product roadmap fully addresses our needs. We like the fact that MySQL Enterprise Edition has everything included; there's no need to purchase separate modules." - Chris Arnold

    Learn More:
    - FairWarning MySQL Case Study
    - Why MySQL 5.6 is an Even Better Embedded Database for Your Products presentation
    - Updating Your Products to MySQL 5.6, Best Practices for OEMs on-demand webinar (audio and/or slides + Q&A transcript)
    - MyISAM to InnoDB – Why and How on-demand webinar (audio and/or slides + Q&A transcript)
    - Top 10 Reasons to Use MySQL as an Embedded Database white paper

    [1] 2013 Best in KLAS: Software & Services report, January 2014. © 2014 KLAS Enterprises, LLC. All rights reserved.

    Read the article

  • Indexing data from multiple tables with Oracle Text

    - by Roger Ford
    It's well known that Oracle Text indexes perform best when all the data to be indexed is combined into a single index. The query

        select * from mytable where contains (title, 'dog') > 0 or contains (body, 'cat') > 0

    will tend to perform much worse than

        select * from mytable where contains (text, 'dog WITHIN title OR cat WITHIN body') > 0

    For this reason, Oracle Text provides the MULTI_COLUMN_DATASTORE, which will combine data from multiple columns into a single index. Effectively, it constructs a "virtual document" at indexing time, which might look something like:

        <title>the big dog</title>
        <body>the ginger cat smiles</body>

    This virtual document can be indexed using either AUTO_SECTION_GROUP, or by explicitly defining sections for title and body, allowing the query as expressed above. Note that we've used a column called "text" - this might be a dummy column added to the table simply to allow us to create an index on it - or we could have created the index on either of the "real" columns - title or body.

    It should be noted that MULTI_COLUMN_DATASTORE doesn't automatically handle updates to the columns used by it - if you create the index on the column text, but specify that columns title and body are to be indexed, you will need to arrange triggers such that the text column is updated whenever title or body is altered.

    That works fine for single tables. But what if we actually want to combine data from multiple tables? In that case there are two approaches which work well:

    1. Create a real table which contains a summary of the information, and create the index on that using the MULTI_COLUMN_DATASTORE. This is simple and effective, but it does use a lot of disk space, as the information to be indexed has to be duplicated.

    2. Create our own "virtual" documents using the USER_DATASTORE. The user datastore allows us to specify a PL/SQL procedure which will be used to fetch the data to be indexed, returned in a CLOB, or occasionally in a BLOB or VARCHAR2. This PL/SQL procedure is called once for each row in the table to be indexed, and is passed the ROWID value of the current row being indexed. The actual contents of the procedure are entirely up to the owner, but it is normal to fetch data from one or more columns of database tables.

    In both cases, we still need to take care of updates - making sure that we have all the triggers necessary to update the indexed column (and, in case 1, the summary table) whenever any of the data to be indexed gets changed.

    I've written full examples of both these techniques, as SQL scripts to be run in the SQL*Plus tool. You will need to run them as a user who has the CTXAPP role and the CREATE DIRECTORY privilege. Part of the data to be indexed is a Microsoft Word file called "1.doc". You should create this file in Word, preferably containing the single line of text "test document". This file can be saved anywhere, but the SQL scripts need to be changed so that the "create or replace directory" command refers to the right location. In the example, I've used C:\doc.

    - multi_table_indexing_1.sql : creates a summary table containing all the data, and uses multi_column_datastore (Download link / View in browser)
    - multi_table_indexing_2.sql : creates "virtual" documents using a procedure as a user_datastore (Download link / View in browser)

    Read the article

  • Oracle MDM Maturity Model

    - by David Butler
    A few weeks ago, I discussed the results of a survey conducted by Oracle's Insight team. The survey was based on the data management maturity model that the Oracle Insight team has developed over the years as they analyzed customer IT organizations to help them get more out of everything they already have. I thought you might like to learn more about the maturity model itself. It can help you figure out where you stand when it comes to getting your organization's data management act together.

    The model covers maturity levels around five key areas: profiling data sources; defining a data strategy; defining a data consolidation plan; data maintenance; and data utilization.

    Profile data sources: Profiling data sources involves taking an inventory of all data sources from across your IT landscape, then evaluating the quality of the data in each source system. This enables the scoping of what data to collect into an MDM hub and what rules are needed to ensure data harmonization across systems.

    Define data strategy: A data strategy requires an understanding of the data usage. Given data usage, various data governance requirements need to be developed. This includes data controls and security rules as well as data structure and usage policies.

    Define data consolidation strategy: Consolidation requires defining your operational data model and how integration is to be accomplished. Cross-referencing common data attributes from multiple systems is needed. Synchronization policies also need to be developed.

    Data maintenance: The desired standardization needs to be defined, including what constitutes a 'match' once the data has been standardized. Cleansing rules are a part of this methodology. Data quality monitoring requirements also need to be defined.

    Utilize the data: What data gets published, and who consumes the data, must be determined. How to get the right data to the right place in the right format, given its intended use, must be understood. Validating the data and ensuring security rules are in place and enforced are crucial aspects of full no-risk data utilization.

    For each of the above data management areas, a maturity level needs to be assessed. Where your organization wants to be should also be identified using the same maturity levels. This results in a sound gap analysis your organization can use to create action plans to achieve the ultimate goals.

    Marginal is the lowest level. It is characterized by manually maintained trusted sources; lacking or inconsistent, silo'd structures with limited integration; and gaps in automation.

    Stable is the next leg up the MDM maturity staircase. It is characterized by tactical MDM implementations that are limited in scope and target a specific division. It includes limited data stewardship capabilities as well.

    Best Practice is a serious MDM maturity level characterized by process automation improvements. The scope is enterprise wide. It is a business solution that provides a single version of the truth, with closed-loop data quality capabilities. It is typically driven by an enterprise architecture group with both business and IT representation.

    Transformational is the highest MDM maturity level. At this level, MDM is quantitatively managed. It is integrated with Business Intelligence, SOA, and BPM, and MDM is leveraged in business process orchestration.

    Take an inventory using this MDM Maturity Model and see where you are in your journey to full MDM maturity, with all the business benefits that accrue to organizations who have mastered their data for the benefit of all operational applications, business processes, and analytical systems. To learn more, Trevor Naidoo and I have written the Oracle MDM Maturity Model whitepaper. It's free, so go ahead and download it and use it as you see fit.

    Read the article

  • Second Day of Data Integration Track at OpenWorld 2012

    - by Doug Reid
    Our second day at OpenWorld, and the Data Integration Team was very active with customer meetings, product updates, product demonstrations, sessions, plus much more. If the volume of traffic by our demo pods is any indicator, this is a record year for attendance at OpenWorld. The DIS team has had a tremendous number of people stop by our demo pods to learn about the latest product releases or to speak to one of our product managers.

    For Oracle GoldenGate, there has been a great deal of interest in Integrated Capture and the Oracle GoldenGate Monitor plug-in for Enterprise Manager. Our customer panels this year have been very well attended, and on Tuesday we held the "Real World Operational Reporting with Oracle GoldenGate Customer Panel". On the panel this year we had Michael Wells from Raymond James, Joy Mathew and Venki Govindarajan from Comcast, and Serkan Karatas from Turk Telekom. Our panelists have a great mix of experiences, and all are passionate about using Oracle Data Integration products to solve very complex use cases. Each panelist was given ten minutes to overview their use of our product, followed by a barrage of questions from the audience.

    Michael Wells spoke about using Oracle GoldenGate for heterogeneous real-time replication from HP (Tandem) NonStop to SQL Server, and emphasized the need for using standard naming conventions when customers configure GoldenGate, as the practice is immensely helpful when debugging a problem. Joy Mathew and Venki Govindarajan from Comcast described how they have used GoldenGate for over a decade, and their experiences of using the product for replicating data from HP NonStop to Teradata. Serkan Karatas from Turk Telekom dove into using Oracle GoldenGate and the value of archiving data in extremely large databases, which in Turk Telekom's case resulted in a 1 month ROI for the entire project. Thanks again to our panelists and audience participants for making the session interactive and informative.

    For Wednesday we have a number of sessions available to attendees, plus two hands-on labs, which I have listed below. If you are unable to attend our hands-on lab for Oracle GoldenGate Veridata, it is available online at youtube.com.

    Sessions

    11:45 AM - 12:45 PM
    - Best Practices for High Availability with Oracle GoldenGate on Oracle Exadata - Moscone South - 102

    1:15 PM - 2:15 PM
    - Customer Perspectives: Oracle Data Integrator - Marriott Marquis - Golden Gate C3
    - Oracle GoldenGate Case Study: Real-Time Operational Reporting Deployment at Oracle - Moscone West - 2003
    - Data Preparation and Ongoing Governance with the Oracle Enterprise Data Quality Platform - Moscone West - 3000

    3:30 PM - 4:30 PM
    - Best Practices for Conflict Detection and Resolution in Oracle GoldenGate for Active/Active - Moscone West - 3000

    5:00 PM - 6:00 PM
    - Tuning and Troubleshooting Oracle GoldenGate on Oracle Database - Moscone South - 102

    Hands-on Labs

    10:15 AM - 11:15 AM
    - Introduction to Oracle GoldenGate Veridata - Marriott Marquis - Salon 1/2

    11:45 AM - 12:45 PM
    - Oracle Data Integrator and Oracle SOA Suite: Hands-on Lab - Marriott Marquis - Salon 1/2

    If you are at OpenWorld, please join us in these sessions. For a full review of the data integration track at OpenWorld, please see our Focus-On document.

    Read the article

  • Local LINQtoSQL Database For Your Windows Phone 7 Application

    - by Tim Murphy
    There aren't many applications that are of value without having some form of data store. In Windows Phone development we have a few options. You can store text directly to isolated storage. You can also use a number of third party libraries to create or mimic databases in isolated storage. With Mango we gained the ability to have a native .NET database approach which uses LINQ to SQL. In this article I will try to bring together the components needed to implement this last type of data store and fill in some of the blanks that I think other articles have left out.

    Defining A Database

    The first thing you are going to need to do is define classes that represent your tables, and a data context class that is used as the overall database definition. The table class consists of column definitions, as you would expect. They can have relationships and constraints as with any relational DBMS. Below is an example of a table definition.

    First you will need to add some assembly references to the code file.

        using System.ComponentModel;
        using System.Data.Linq;
        using System.Data.Linq.Mapping;

    You can then add the table class and its associated columns. It needs to implement INotifyPropertyChanged and INotifyPropertyChanging. Each level of the class needs to be decorated with the attribute appropriate for that part of the definition. Where the class represents the table, the properties represent the columns. In this example you will see that the column is marked as a primary key, not nullable, with an auto generated value. You will also notice that the column property's set method uses the NotifyPropertyChanging and NotifyPropertyChanged methods in order to make sure that the proper events are fired.

        [Table]
        public class MyTable : INotifyPropertyChanged, INotifyPropertyChanging
        {
            public event PropertyChangedEventHandler PropertyChanged;
            private void NotifyPropertyChanged(string propertyName)
            {
                if (PropertyChanged != null)
                {
                    PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
                }
            }

            public event PropertyChangingEventHandler PropertyChanging;
            private void NotifyPropertyChanging(string propertyName)
            {
                if (PropertyChanging != null)
                {
                    PropertyChanging(this, new PropertyChangingEventArgs(propertyName));
                }
            }

            private int _TableKey;
            [Column(IsPrimaryKey = true, IsDbGenerated = true, DbType = "INT NOT NULL Identity", CanBeNull = false, AutoSync = AutoSync.OnInsert)]
            public int TableKey
            {
                get { return _TableKey; }
                set
                {
                    NotifyPropertyChanging("TableKey");
                    _TableKey = value;
                    NotifyPropertyChanged("TableKey");
                }
            }
        }

    The last part of the database definition that needs to be created is the data context. This is a simple class that takes an isolated storage connection string in its constructor and then instantiates tables as public properties.

        public class MyDataContext : DataContext
        {
            public MyDataContext(string connectionString) : base(connectionString)
            {
                MyRecords = this.GetTable<MyTable>();
            }

            public Table<MyTable> MyRecords;
        }

    Creating A New Database Instance

    Now that we have a database definition, it is time to create an instance of the data context within our Windows Phone app. When your app fires up it should check if the database already exists and create an instance if it does not. I would suggest that this be part of the constructor of your ViewModel.

        db = new MyDataContext(connectionString);
        if (!db.DatabaseExists())
        {
            db.CreateDatabase();
        }

    The next thing you have to know is how the connection string for isolated storage should be constructed. The main sticking point I have found is that the database cannot be created unless the file mode is read/write. You may have different connection strings, but the initial one needs to be similar to the following.

        string connString = "Data Source = 'isostore:/MyApp.sdf'; File Mode = read write";

    Using Your Database

    Now that you have done all the up front work, it is time to put the database to use. To make your life a little easier and keep proper separation between your view and your viewmodel, you should add a couple of methods to the viewmodel. These will do the CRUD work of your application. What you will notice is that the SubmitChanges method is the secret sauce in all of the methods that change data.

        private MyDataContext myDb;

        private ObservableCollection<MyTable> _viewRecords;
        public ObservableCollection<MyTable> ViewRecords
        {
            get { return _viewRecords; }
            set
            {
                _viewRecords = value;
                NotifyPropertyChanged("ViewRecords");
            }
        }

        public void LoadMyTableData()
        {
            var tempItems = from MyTable myRecord in myDb.MyRecords
                            select myRecord;
            ViewRecords = new ObservableCollection<MyTable>(tempItems);
        }

        public void SaveChangesToDb()
        {
            myDb.SubmitChanges();
        }

        public void AddMyTableItem(MyTable newRecord)
        {
            myDb.MyRecords.InsertOnSubmit(newRecord);
            myDb.SubmitChanges();
        }

        public void DeleteMyTableItem(MyTable oldRecord)
        {
            myDb.MyRecords.DeleteOnSubmit(oldRecord);
            myDb.SubmitChanges();
        }

    Updating An Existing Database

    What happens when you need to change the structure of your database? Unfortunately you have to add code to your application that checks the version of the database, which over time will create some pollution in your code base. On the other hand, it does give you control of the update. In this example you will see the DatabaseSchemaUpdater in action. Assuming we added a "Notes" field to the MyTable structure, the following code will check if the database is the latest version and add the field if it isn't.

        if (!myDb.DatabaseExists())
        {
            myDb.CreateDatabase();
        }
        else
        {
            DatabaseSchemaUpdater dbUpdater = myDb.CreateDatabaseSchemaUpdater();
            if (dbUpdater.DatabaseSchemaVersion < 2)
            {
                dbUpdater.AddColumn<MyTable>("Notes");
                dbUpdater.DatabaseSchemaVersion = 2;
                dbUpdater.Execute();
            }
        }

    Summary

    This approach does take a fairly large amount of work, but I think the end product is robust and very native for .NET developers. It turns out to be worth the investment.

    del.icio.us Tags: Windows Phone, Windows Phone 7, LINQ to SQL, LINQ, Database, Isolated Storage

    Read the article

  • Application using JOGL stays in Limbo when closing

    - by Roy T.
    I'm writing a game using Java and OpenGL with the JOGL bindings. I noticed that my game doesn't terminate properly when I close the window, even though I've set the closing operation of the JFrame to EXIT_ON_CLOSE. I couldn't track down where the problem was, so I've made a small reproduction case. Note that on some computers the program terminates normally when closing the window, but on others (notably my own) something in the JVM keeps lingering, which causes the JFrame to never be disposed and the application to never exit. I haven't found anything in common between the computers that had difficulty terminating: all had Windows 7, Java 7 and the same version of JOGL, and some terminated normally while others had this problem. The test case is as follows:

        public class App extends JFrame implements GLEventListener {

            private GLCanvas canvas;

            @Override
            public void display(GLAutoDrawable drawable) {
                GL3 gl = drawable.getGL().getGL3();
                gl.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
                gl.glClear(GL3.GL_COLOR_BUFFER_BIT);
                gl.glFlush();
            }

            // The overrides for dispose (the OpenGL one), init and reshape are empty

            public App(String title, boolean full_screen, int width, int height) {
                // snipped setting the width and height of the JFrame

                GLProfile profile = GLProfile.get(GLProfile.GL3);
                GLCapabilities capabilities = new GLCapabilities(profile);

                canvas = new GLCanvas(capabilities);
                canvas.addGLEventListener(this);
                canvas.setSize(getWidth(), getHeight());
                add(canvas);

                setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); // !!!
                setVisible(true);
            }

            @Override
            public void dispose() {
                System.out.println("HELP"); // never printed
            }

            public static void main(String[] args) {
                new App("gltut 01", false, 1280, 720);
            }
        }

    As you can see, this doesn't do much more than adding a GLCanvas to the frame and registering the main class as the GLEventListener. So what keeps lingering? I'm not sure. I've made some screenshots (not reproduced here) showing:

    - The application running normally.
    - The application after the JFrame is closed; note that the JVM still hasn't exited or printed a return code.
    - The application after it was force closed. Note the return code -1; it wasn't just the JVM standing by or something, the application really hadn't exited yet.

    So what is keeping the application in limbo? Might it be the circular reference between the GLCanvas and the JFrame? I thought the GC could figure that out. If so, how should I deal with that when I want to exit? Is there any other clean-up required when using JOGL? I've tried searching, but it doesn't seem to be necessary.

    Edit, to clarify: there are two dispose functions, dispose(GLAutoDrawable arg), which is a member of GLEventListener, and dispose(), which is a member of JFrame. The first one is called correctly (but I wouldn't know what to do there; destroying the GLAutoDrawable or the GLCanvas gives an infinite exception loop), while the second one is never called.
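
    For what it's worth, a cleanup pattern many JOGL 2.x demos use (a sketch under that assumption, not a verified fix for this particular report) is to handle window closing yourself: destroy the canvas's native GL resources on a worker thread, then exit explicitly, rather than relying on EXIT_ON_CLOSE alone:

        // inside the App constructor, in place of setDefaultCloseOperation(EXIT_ON_CLOSE)
        setDefaultCloseOperation(JFrame.DO_NOTHING_ON_CLOSE);
        addWindowListener(new java.awt.event.WindowAdapter() {
            @Override
            public void windowClosing(java.awt.event.WindowEvent e) {
                // run destroy() off the EDT (the pattern JOGL's own demos use)
                // and only then ask the JVM to exit
                new Thread(new Runnable() {
                    public void run() {
                        canvas.destroy(); // releases the GL context and native resources
                        System.exit(0);
                    }
                }).start();
            }
        });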

    Read the article

  • How can I recover data from a dmg that cannot mount?

    - by Benjamin Lee
    I backed up a hard drive into a dmg, then reformatted the hard drive. (I had earlier deleted the EFI partition, which was preventing me from reinstalling the operating system.) When I tried to use the restore function in Disk Utility, it gave an input/output error. I get this error with anything I do to the image, including mounting, converting, attaching, verifying, scanning, and getting info through hdiutil imageinfo. I have run all of these with hdiutil and the -noverify, -nomount, and -ignorebadchecksums flags. When I copy the image onto another disk/partition, I get a different error: something like "No filesystem". I cannot repair the image with Disk Utility or asr; both throw the I/O error. When I put the -verbose flag on the command I actually get a different error: "hdiutil: attach failed - No child processes". I have output from both the -verbose and -debug flags, but it is fairly long, so I had to attach links to avoid the 3000-character limit. No recovery program can get at the data because the image is both compressed and unmountable. How can I get the data back, and what has gone wrong? -debug -verbose

    Read the article

  • Formatting data from management database

    - by bVector
    I've got some data that goes like this (one record per line: Config_Name, Question, Answer):

        Cisco WAN         Sensitivity:        High
        Cisco WAN         Authorized Users:   Brent, Charles
        Cisco WAN         Last Audited:       n/a
        Cisco WAN         Next Audit:         3/30/2012
        Cisco WAN         Audit Signature:
        Cisco WAN         Username:           MYCOMPANY
        Cisco WAN         Password:
        Cisco WAN         Encrypted-A         ENCRYPTED DATA
        Cisco WAN         Encrypted-B
        Cisco WAN         Encrypted-C
        vCenter server    Sensitivity:        High
        vCenter server    Authorized Users:   Brent, Charles
        vCenter server    Last Audited:
        vCenter server    Next Audit:         3/30/2012
        vCenter server    Audit Signature:    ENCRYPTED DATA
        vCenter server    Username:           administrator
        vCenter server    Password:
        vCenter server    Encrypted-A         ENCRYPTED DATA
        vCenter server    Encrypted-B
        vCenter server    Encrypted-C
        AKSC-NE01 IPMI    Sensitivity:        High
        AKSC-NE01 IPMI    Authorized Users:   Brent, Charles
        AKSC-NE01 IPMI    Last Audited:
        AKSC-NE01 IPMI    Next Audit:         3/30/2012
        AKSC-NE01 IPMI    Audit Signature:    ENCRYPTED DATA
        AKSC-NE01 IPMI    Username:           MYCOMPANY
        AKSC-NE01 IPMI    Password:
        AKSC-NE01 IPMI    Encrypted-A         ENCRYPTED DATA
        AKSC-NE01 IPMI    Encrypted-B
        AKSC-NE01 IPMI    Encrypted-C

    and I need it to be in this format (one row per Config_Name, with the questions as columns):

        Config_Name        Sensitivity:  Authorized Users:  Last Audited:  Next Audit:  Audit Signature:  Username:      Password:  Encrypted-A     Encrypted-B  Encrypted-C
        AKSC-NE01 IPMI     High          Brent, Charles                    3/30/2012    ENCRYPTED DATA    MYCOMPANY                 ENCRYPTED DATA
        Cisco ASA5505 WAN  High          Brent, Charles     n/a            3/30/2012    ENCRYPTED DATA    MYCOMPANY                 ENCRYPTED DATA
        vCenter server     High          Brent, Charles                    3/30/2012    ENCRYPTED DATA    administrator             ENCRYPTED DATA

    The tabs get messed up on here, but hopefully you get my drift. Does anyone know an easy way to do this? I haven't found one with Excel just yet.
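    If exporting the data to a tab-separated text file is an option, a short script can do the pivot. Below is a minimal C# sketch, assuming an export named "input.txt" with three tab-separated columns (Config_Name, Question, Answer) and a header row; the file name and layout are assumptions, not part of the original question.

        using System;
        using System.IO;
        using System.Linq;

        class PivotConfigData
        {
            static void Main()
            {
                // Parse the tab-separated export into (Config, Question, Answer) rows.
                var rows = File.ReadLines("input.txt")
                    .Skip(1)                                // skip the header row
                    .Select(line => line.Split('\t'))
                    .Where(f => f.Length >= 2)
                    .Select(f => new
                    {
                        Config = f[0],
                        Question = f[1],
                        Answer = f.Length > 2 ? f[2] : ""
                    })
                    .ToList();

                // Use the questions, in order of first appearance, as column headers.
                var questions = rows.Select(r => r.Question).Distinct().ToList();
                Console.WriteLine("Config_Name\t" + string.Join("\t", questions));

                // Emit one line per configuration, with answers lined up under the headers.
                foreach (var group in rows.GroupBy(r => r.Config))
                {
                    var answers = questions.Select(q =>
                        group.FirstOrDefault(r => r.Question == q)?.Answer ?? "");
                    Console.WriteLine(group.Key + "\t" + string.Join("\t", answers));
                }
            }
        }

    Redirecting the output to a .txt file and opening it in Excel gives the pivoted layout. Inside Excel itself, a PivotTable with Config_Name as rows and Question as columns would be the no-code route, but a standard PivotTable aggregates its values, so text answers don't display directly.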

    Read the article

  • File recovery from Mac results in random files and extensions – how do I get my data back?

    - by Robsta
    This Mac hard drive was dying. Someone I knew did a file recovery and got as many files as he could. The program (not sure how it was done, or what program it was) dished out a bunch of files with names such as: DIR56.TOC, DIR55.CUR, DIR54.GPZ, DIR53.GZI … and so forth, all the way down to DIR0.LZH. Some of the file extensions I do understand, like .JPEG or .MOV, but most of them are ones I've never heard of. I've googled some of them, like .TOC, which stands for "table of contents", but I don't understand how to transfer that data back to the Mac. Currently the files are on a Windows machine, being transferred onto an external hard drive that the Mac can read. The Mac can also see all the files; however, the few I tested to check whether the Mac recognizes them (like .TOC and .CUR) cannot be opened. Anyone have any idea as to what I should do? There are some important assignments on there I need to get. EDIT: The data transfer was most likely done by Easy Recover 6 professional (95% sure, no guarantee).

    Read the article

  • Any ideas out there as to how the data can be recovered from an SSD?

    - by ben
    A friend had some form of catastrophic failure on an HP Mini 1000; it's unbootable. Of course there was data that wasn't backed up. I've removed the SSD and hooked it up to a ZIF-40 enclosure, but I cannot seem to get the drive to be recognized in Windows 7. In Disk Management it displays as present but uninitialized, and attempting to initialize it presents an error: Virtual Disk Manager - "The device is not ready". There is scant information on MIE (the custom OS), so I'm not even sure what kind of file system I'm dealing with. In any case, if the file system is indeed some flavor other than FAT or NTFS, is this error consistent with that? Are there any creative ideas out there as to how the data can be recovered?

    Update: Thanks for all the suggestions! I hadn't even considered running a live CD. Unfortunately, no luck with Ubuntu (live CD) or explore2fs. The ZIF connection seems OK (the LED is color-coded: green for a proper connection, orange for not). The drive can't be initialized and therefore can't be formatted, so I guess there may be some real damage. It probably needs to go to a specialist. Thanks again for the feedback; much appreciated.

    Read the article

  • How to get data out of a Maxtor Shared Storage II that fails to boot?

    - by Jonik
    I've got a Maxtor Shared Storage II (in RAID 1 mode) which has apparently developed a hardware failure: it fails to boot properly and is unreachable via the network. When powering it on, it keeps making clunking/chirping disk noises and then sort of resets itself (with a flash of orange in the usually green LEDs); it then repeats this as if stuck in a loop. In fact, even the power button does nothing now; the only way I can affect the device at all is to plug in or pull out the power cord!

    (To be clear, I've come to regard this piece of garbage, which cost about 460 €, as my worst tech purchase ever. Even before this failure I had encountered many annoyances with the drive: 1) the software to manage it is rather crappy; 2) it is way noisier than a device of this type should be; 3) when your Mac comes out of sleep, Maxtor's "EasyManage" cannot re-mount the drive automatically.)

    Anyway, the question at hand is how to get my data out of it. As a very concrete first step: is there a way to open this thing without breaking the plastic casing into pieces? It is far from obvious to me how to get beyond this stage; it opens a little from one end but not from the other. If I somehow got the disks out, I could try mounting them in one of the Macs or Linux boxes I have available (although I don't know yet whether I'd need adapters for that). (NB: for the purposes of this question, never mind any warranty or replacement issues; those are secondary to recovering the data.)

    Read the article
