Search Results

Search found 60427 results on 2418 pages for 'data transfer'.


  • SQL 2014 does data the way developers want

    - by Rob Farley
    A post I’ve been meaning to write for a while, good that it fits with this month’s T-SQL Tuesday, hosted by Joey D’Antoni (@jdanton). Ever since I got into databases, I’ve been a fan. I studied Pure Maths at university (as well as Computer Science), and am very comfortable with Set Theory, which undergirds relational database concepts. But I’ve also spent a long time as a developer, and appreciate that databases don’t exactly fit within the stuff I learned in my first year of uni, particularly the “Algorithms and Data Structures” subject, in which we studied concepts like linked lists. Writing in languages like C, we used pointers to quickly move around data, without a database in sight. Of course, if we had a power failure all this data was lost, as it was only persisted in RAM. Perhaps it’s why I’m a fan of database internals, of indexes, latches, execution plans, and so on – the developer in me wants to be reassured that we’re getting to the data as efficiently as possible.
    Back when SQL Server 2005 was approaching, one of the big stories was around CLR. Many were saying that T-SQL stored procedures would be a thing of the past because we now had CLR, and that it was obviously going to be much faster than using the abstracted T-SQL. Around the same time, we were seeing technologies like Linq-to-SQL produce poor T-SQL equivalents, and developers had had a gutful. They wanted to move away from T-SQL, having lost trust in it. I was never one of those developers, because I’d looked under the covers and knew that despite being abstracted, T-SQL was still a good way of getting to data. It worked for me, appealing to both my Set Theory side and my Developer side. CLR hasn’t exactly become the default option for stored procedures, although there are plenty of situations where it can be useful for getting faster performance.
    SQL Server 2014 is different though, through Hekaton – its In-Memory OLTP environment. When you create a table using Hekaton (that is, a memory-optimized one), the table you create is the kind of thing you’d’ve made as a developer. It creates code in C leveraging structs and pointers and arrays, which it compiles into fast code. When you insert data into it, it creates a new instance of a struct in memory, and adds it to an array. When the insert is committed, a small write is made to the transaction log to make sure it’s durable, but none of the locking and latching behaviour that typifies transactional systems is needed. Indexes are done using hashes and Bw-trees (which avoid locking through the use of pointers), and by handling each update as a delete-and-insert. This is data the way that developers do it when they’re coding for performance – the way I was taught at university before I learned about databases. Being done in C, it compiles to very quick code, and although these tables don’t support every feature that regular SQL tables do, this is still an excellent direction that has been taken. @rob_farley

    Read the article

  • Strategy for avoiding duplicate object ids for data shared across devices using iCloud

    - by rmaddy
    I have a data-intensive iOS app that is not using Core Data, nor does it support iCloud syncing (yet). All of my objects are created with unique keys. I use a simple long long initialized with the current time; then, as I need a new key, I increment the value by 1. This has all worked well for a few years with the app running isolated on a single device. Now I want to add support for automatic data sync across devices using iCloud. As my app is written, there is the possibility that two objects created on two different devices could end up with the same key. I need to avoid this possibility, and I'm looking for ideas for solving the issue. I have a few requirements that the solution must meet:
    1) The key needs to remain a single integral data type. Converting all existing keys to a compound key, a string, or some other type would affect the entire code base and likely result in more bugs than it's worth.
    2) The solution can't depend on an Internet connection. A user must be able to run the app and add data even with no Internet connection. The data should still resolve properly later when the data syncs through iCloud once a connection is available. I'll accept one exception to this rule: if no other option is available, I may be open to requiring an Internet connection the first time the app's data is initialized.
    One idea I have been toying around with is logically splitting the integer key into two parts. The high 4 or 5 bits could be used as some sort of device id while the rest represents the actual key. The fuzzy part is figuring out how to come up with non-conflicting device ids that fit in a few bits. This should be viable since I don't need to deal with millions of devices, only the few devices that would be shared by a given iCloud account. I'm open to suggestions. Thanks.
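    The split-key idea sketched below is purely illustrative and not from the original question; C# is used only for convenience, and the same bit arithmetic applies to the app's long long keys in Objective-C. The 5-bit device id and all names here are assumptions.

        using System;

        // Hypothetical sketch: pack a small per-device id into the high bits of a
        // 64-bit key, leaving the low bits for the locally incremented counter.
        static class KeySketch
        {
            const int DeviceBits = 5;                        // assumed: enough for the devices on one iCloud account
            const int CounterBits = 63 - DeviceBits;         // keep the sign bit clear
            const long CounterMask = (1L << CounterBits) - 1;

            public static long Make(int deviceId, long counter)
            {
                if (deviceId < 0 || deviceId >= (1 << DeviceBits))
                    throw new ArgumentOutOfRangeException(nameof(deviceId));
                return ((long)deviceId << CounterBits) | (counter & CounterMask);
            }

            public static int DeviceOf(long key) => (int)((ulong)key >> CounterBits);
            public static long CounterOf(long key) => key & CounterMask;
        }

    Here KeySketch.Make(3, timeBasedCounter) builds a key and KeySketch.DeviceOf(key) recovers the device part; the genuinely open question from the post, assigning non-conflicting device ids without a network, is deliberately left out.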

    Read the article

  • Why is BIND giving me a SERVFAIL in this case? (Notes inside)

    - by imaginative
    Woke up this morning to a bunch of the following:
        root@foo:/etc/bind# dig @1.2.3.4 foo.example.com
        ; <<>> DiG 9.6.1-P2 <<>> @1.2.3.4 foo.example.com
        ; (1 server found)
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 36121
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
        ;; QUESTION SECTION:
        ;;foo.example.com. IN A
        ;; Query time: 0 msec
        ;; SERVER: 1.2.3.4#53(1.2.3.4)
        ;; WHEN: Thu Apr 1 09:57:59 2010
        ;; MSG SIZE rcvd: 31
    Some background on the fictitious "1.2.3.4": it's a slave name server in my nameserver "farm". Technically I have ns1 (being the master) and ns2/ns3. Currently ns1/ns2 are down for maintenance, so I left ns3 serving live traffic. That's the point, DNS is supposed to be resilient. Now the odd part is, "1.2.3.4" was serving requests for example.com just fine for the last 4-5 days. This morning I got a phone call that it's non-responsive. After investigation I see the message you see above, SERVFAIL. I looked into the zone file and saw the following:
        example.com IN SOA ns1.example.com. hostmaster.mail.example.com. (
    I wondered at this point whether the nameserver thought it was not authoritative over example.com, and adjusted it to the following:
        example.com IN SOA ns3.example.com. hostmaster.mail.example.com. (
    After that, it started responding again for all authoritative queries for example.com. I have no idea why. I thought these things were supposed to be normalized upon zone transfer from ns1 to ns3? Can someone please explain why this happened and how to prevent it from happening in the future? I've never had a similar problem, and because I don't understand it well, I might be missing some critical information in this question, so please let me know if I can add any detail to make things clearer. One more thing to note: I have other domains that I'm authoritative for that have their SOA still saying ns1.example.com. and not ns3.example.com. Those domains are serving requests just fine! Is it a matter of time before they also stop and I have to change the SOA to ns3.example.com? Is this also only required because ns1 and ns2 are currently offline?

    Read the article

  • JGoodies HashMap

    - by JohnMcClane
    Hi, I'm trying to build a chart program using presentation model. Using JGoodies for data binding was relatively easy for simple types like strings or numbers. But I can't figure out how to use it on a hashmap. I'll try to explain how the chart works and what my problem is: A chart consists of DataSeries, a DataSeries consists of DataPoints. I want to have a data model and to be able to use different views on the same model (e.g. bar chart, pie chart,...). Each of them consists of three classes. For example: DataPointModel: holds the data model (value, label, category) DataPointViewModel: extends JGoodies PresentationModel. wraps around DataPointModel and holds view properties like font and color. DataPoint: abstract class, extends JComponent. Different Views must subclass and implement their own ui. Binding and creating the data model was easy, but i don't know how to bind my data series model. package at.onscreen.chart; import java.beans.PropertyChangeListener; import java.beans.PropertyChangeSupport; import java.beans.PropertyVetoException; import java.util.Collection; import java.util.HashMap; import java.util.Iterator; public class DataSeriesModel { public static String PROPERTY_DATAPOINT = "dataPoint"; public static String PROPERTY_DATAPOINTS = "dataPoints"; public static String PROPERTY_LABEL = "label"; public static String PROPERTY_MAXVALUE = "maxValue"; /** * holds the data points */ private HashMap dataPoints; /** * the label for the data series */ private String label; /** * the maximum data point value */ private Double maxValue; /** * the model supports property change notification */ private PropertyChangeSupport propertyChangeSupport; /** * default constructor */ public DataSeriesModel() { this.maxValue = Double.valueOf(0); this.dataPoints = new HashMap(); this.propertyChangeSupport = new PropertyChangeSupport(this); } /** * constructor * @param label - the series label */ public DataSeriesModel(String label) { this.dataPoints = new HashMap(); this.maxValue = Double.valueOf(0); this.label = label; this.propertyChangeSupport = new PropertyChangeSupport(this); } /** * full constructor * @param label - the series label * @param dataPoints - an array of data points */ public DataSeriesModel(String label, DataPoint[] dataPoints) { this.dataPoints = new HashMap(); this.propertyChangeSupport = new PropertyChangeSupport(this); this.maxValue = Double.valueOf(0); this.label = label; for (int i = 0; i < dataPoints.length; i++) { this.addDataPoint(dataPoints[i]); } } /** * full constructor * @param label - the series label * @param dataPoints - a collection of data points */ public DataSeriesModel(String label, Collection dataPoints) { this.dataPoints = new HashMap(); this.propertyChangeSupport = new PropertyChangeSupport(this); this.maxValue = Double.valueOf(0); this.label = label; for (Iterator it = dataPoints.iterator(); it.hasNext();) { this.addDataPoint(it.next()); } } /** * adds a new data point to the series. if the series contains a data point with same id, it will be replaced by the new one. 
* @param dataPoint - the data point */ public void addDataPoint(DataPoint dataPoint) { String category = dataPoint.getCategory(); DataPoint oldDataPoint = this.getDataPoint(category); this.dataPoints.put(category, dataPoint); this.setMaxValue(Math.max(this.maxValue, dataPoint.getValue())); this.propertyChangeSupport.firePropertyChange(PROPERTY_DATAPOINT, oldDataPoint, dataPoint); } /** * returns the data point with given id or null if not found * @param uid - the id of the data point * @return the data point or null if there is no such point in the table */ public DataPoint getDataPoint(String category) { return this.dataPoints.get(category); } /** * removes the data point with given id from the series, if present * @param category - the data point to remove */ public void removeDataPoint(String category) { DataPoint dataPoint = this.getDataPoint(category); this.dataPoints.remove(category); if (dataPoint != null) { if (dataPoint.getValue() == this.getMaxValue()) { Double maxValue = Double.valueOf(0); for (Iterator it = this.iterator(); it.hasNext();) { DataPoint itDataPoint = it.next(); maxValue = Math.max(itDataPoint.getValue(), maxValue); } this.setMaxValue(maxValue); } } this.propertyChangeSupport.firePropertyChange(PROPERTY_DATAPOINT, dataPoint, null); } /** * removes all data points from the series * @throws PropertyVetoException */ public void removeAll() { this.setMaxValue(Double.valueOf(0)); this.dataPoints.clear(); this.propertyChangeSupport.firePropertyChange(PROPERTY_DATAPOINTS, this.getDataPoints(), null); } /** * returns the maximum of all data point values * @return the maximum of all data points */ public Double getMaxValue() { return this.maxValue; } /** * sets the max value * @param maxValue - the max value */ protected void setMaxValue(Double maxValue) { Double oldMaxValue = this.getMaxValue(); this.maxValue = maxValue; this.propertyChangeSupport.firePropertyChange(PROPERTY_MAXVALUE, oldMaxValue, maxValue); } /** * returns true if there is a data point with given category * @param category - the data point category * @return true if there is a data point with given category, otherwise false */ public boolean contains(String category) { return this.dataPoints.containsKey(category); } /** * returns the label for the series * @return the label for the series */ public String getLabel() { return this.label; } /** * returns an iterator over the data points * @return an iterator over the data points */ public Iterator iterator() { return this.dataPoints.values().iterator(); } /** * returns a collection of the data points. the collection supports removal, but does not support adding of data points. 
* @return a collection of data points */ public Collection getDataPoints() { return this.dataPoints.values(); } /** * returns the number of data points in the series * @return the number of data points */ public int getSize() { return this.dataPoints.size(); } /** * adds a PropertyChangeListener * @param listener - the listener */ public void addPropertyChangeListener(PropertyChangeListener listener) { this.propertyChangeSupport.addPropertyChangeListener(listener); } /** * removes a PropertyChangeListener * @param listener - the listener */ public void removePropertyChangeListener(PropertyChangeListener listener) { this.propertyChangeSupport.removePropertyChangeListener(listener); } } package at.onscreen.chart; import java.beans.PropertyVetoException; import java.util.Collection; import java.util.Iterator; import com.jgoodies.binding.PresentationModel; public class DataSeriesViewModel extends PresentationModel { /** * default constructor */ public DataSeriesViewModel() { super(new DataSeriesModel()); } /** * constructor * @param label - the series label */ public DataSeriesViewModel(String label) { super(new DataSeriesModel(label)); } /** * full constructor * @param label - the series label * @param dataPoints - an array of data points */ public DataSeriesViewModel(String label, DataPoint[] dataPoints) { super(new DataSeriesModel(label, dataPoints)); } /** * full constructor * @param label - the series label * @param dataPoints - a collection of data points */ public DataSeriesViewModel(String label, Collection dataPoints) { super(new DataSeriesModel(label, dataPoints)); } /** * full constructor * @param model - the data series model */ public DataSeriesViewModel(DataSeriesModel model) { super(model); } /** * adds a data point to the series * @param dataPoint - the data point */ public void addDataPoint(DataPoint dataPoint) { this.getBean().addDataPoint(dataPoint); } /** * returns true if there is a data point with given category * @param category - the data point category * @return true if there is a data point with given category, otherwise false */ public boolean contains(String category) { return this.getBean().contains(category); } /** * returns the data point with given id or null if not found * @param uid - the id of the data point * @return the data point or null if there is no such point in the table */ public DataPoint getDataPoint(String category) { return this.getBean().getDataPoint(category); } /** * returns a collection of the data points. the collection supports removal, but does not support adding of data points. 
* @return a collection of data points */ public Collection getDataPoints() { return this.getBean().getDataPoints(); } /** * returns the label for the series * @return the label for the series */ public String getLabel() { return this.getBean().getLabel(); } /** * sets the max value * @param maxValue - the max value */ public Double getMaxValue() { return this.getBean().getMaxValue(); } /** * returns the number of data points in the series * @return the number of data points */ public int getSize() { return this.getBean().getSize(); } /** * returns an iterator over the data points * @return an iterator over the data points */ public Iterator iterator() { return this.getBean().iterator(); } /** * removes all data points from the series * @throws PropertyVetoException */ public void removeAll() { this.getBean().removeAll(); } /** * removes the data point with given id from the series, if present * @param category - the data point to remove */ public void removeDataPoint(String category) { this.getBean().removeDataPoint(category); } } package at.onscreen.chart; import java.beans.PropertyChangeEvent; import java.beans.PropertyChangeListener; import java.beans.PropertyVetoException; import java.util.Collection; import java.util.Iterator; import javax.swing.JComponent; public abstract class DataSeries extends JComponent implements PropertyChangeListener { /** * the model */ private DataSeriesViewModel model; /** * default constructor */ public DataSeries() { this.model = new DataSeriesViewModel(); this.model.addPropertyChangeListener(this); this.createComponents(); } /** * constructor * @param label - the series label */ public DataSeries(String label) { this.model = new DataSeriesViewModel(label); this.model.addPropertyChangeListener(this); this.createComponents(); } /** * full constructor * @param label - the series label * @param dataPoints - an array of data points */ public DataSeries(String label, DataPoint[] dataPoints) { this.model = new DataSeriesViewModel(label, dataPoints); this.model.addPropertyChangeListener(this); this.createComponents(); } /** * full constructor * @param label - the series label * @param dataPoints - a collection of data points */ public DataSeries(String label, Collection dataPoints) { this.model = new DataSeriesViewModel(label, dataPoints); this.model.addPropertyChangeListener(this); this.createComponents(); } /** * full constructor * @param model - the model */ public DataSeries(DataSeriesViewModel model) { this.model = model; this.model.addPropertyChangeListener(this); this.createComponents(); } /** * creates, binds and configures UI components. * data point properties can be created here as components or be painted in paintComponent. 
*/ protected abstract void createComponents(); @Override public void propertyChange(PropertyChangeEvent evt) { this.repaint(); } /** * adds a data point to the series * @param dataPoint - the data point */ public void addDataPoint(DataPoint dataPoint) { this.model.addDataPoint(dataPoint); } /** * returns true if there is a data point with given category * @param category - the data point category * @return true if there is a data point with given category, otherwise false */ public boolean contains(String category) { return this.model.contains(category); } /** * returns the data point with given id or null if not found * @param uid - the id of the data point * @return the data point or null if there is no such point in the table */ public DataPoint getDataPoint(String category) { return this.model.getDataPoint(category); } /** * returns a collection of the data points. the collection supports removal, but does not support adding of data points. * @return a collection of data points */ public Collection getDataPoints() { return this.model.getDataPoints(); } /** * returns the label for the series * @return the label for the series */ public String getLabel() { return this.model.getLabel(); } /** * sets the max value * @param maxValue - the max value */ public Double getMaxValue() { return this.model.getMaxValue(); } /** * returns the number of data points in the series * @return the number of data points */ public int getDataPointCount() { return this.model.getSize(); } /** * returns an iterator over the data points * @return an iterator over the data points */ public Iterator iterator() { return this.model.iterator(); } /** * removes all data points from the series * @throws PropertyVetoException */ public void removeAll() { this.model.removeAll(); } /** * removes the data point with given id from the series, if present * @param category - the data point to remove */ public void removeDataPoint(String category) { this.model.removeDataPoint(category); } /** * returns the data series view model * @return - the data series view model */ public DataSeriesViewModel getViewModel() { return this.model; } /** * returns the data series model * @return - the data series model */ public DataSeriesModel getModel() { return this.model.getBean(); } } package at.onscreen.chart.builder; import java.util.Collection; import net.miginfocom.swing.MigLayout; import at.onscreen.chart.DataPoint; import at.onscreen.chart.DataSeries; import at.onscreen.chart.DataSeriesViewModel; public class BuilderDataSeries extends DataSeries { /** * default constructor */ public BuilderDataSeries() { super(); } /** * constructor * @param label - the series label */ public BuilderDataSeries(String label) { super(label); } /** * full constructor * @param label - the series label * @param dataPoints - an array of data points */ public BuilderDataSeries(String label, DataPoint[] dataPoints) { super(label, dataPoints); } /** * full constructor * @param label - the series label * @param dataPoints - a collection of data points */ public BuilderDataSeries(String label, Collection dataPoints) { super(label, dataPoints); } /** * full constructor * @param model - the model */ public BuilderDataSeries(DataSeriesViewModel model) { super(model); } @Override protected void createComponents() { this.setLayout(new MigLayout()); /* * * I want to add a new BuilderDataPoint for each data point in the model. * I want the BuilderDataPoints to be synchronized with the model. * e.g. 
when a data point is removed from the model, the BuilderDataPoint shall be removed * from the BuilderDataSeries * */ } } package at.onscreen.chart.builder; import javax.swing.JFormattedTextField; import javax.swing.JTextField; import at.onscreen.chart.DataPoint; import at.onscreen.chart.DataPointModel; import at.onscreen.chart.DataPointViewModel; import at.onscreen.chart.ValueFormat; import com.jgoodies.binding.adapter.BasicComponentFactory; import com.jgoodies.binding.beans.BeanAdapter; public class BuilderDataPoint extends DataPoint { /** * default constructor */ public BuilderDataPoint() { super(); } /** * constructor * @param category - the category */ public BuilderDataPoint(String category) { super(category); } /** * constructor * @param value - the value * @param label - the label * @param category - the category */ public BuilderDataPoint(Double value, String label, String category) { super(value, label, category); } /** * full constructor * @param model - the model */ public BuilderDataPoint(DataPointViewModel model) { super(model); } @Override protected void createComponents() { BeanAdapter beanAdapter = new BeanAdapter(this.getModel(), true); ValueFormat format = new ValueFormat(); JFormattedTextField value = BasicComponentFactory.createFormattedTextField(beanAdapter.getValueModel(DataPointModel.PROPERTY_VALUE), format); this.add(value, "w 80, growx, wrap"); JTextField label = BasicComponentFactory.createTextField(beanAdapter.getValueModel(DataPointModel.PROPERTY_LABEL)); this.add(label, "growx, wrap"); JTextField category = BasicComponentFactory.createTextField(beanAdapter.getValueModel(DataPointModel.PROPERTY_CATEGORY)); this.add(category, "growx, wrap"); } } To sum it up: I need to know how to bind a hash map property to JComponent.components property. JGoodies is in my opinion not very well documented, I spent a long time searching through the internet, but I did not find any solution to my problem. Hope you can help me.

    Read the article

  • Alternatives to GameKit for iPhone to iPhone transfer

    - by Mike
    I need to transfer information as an NSData object from one iPhone to another inside an application, and was originally planning on using Bluetooth/GameKit, but the data looks like it will be in the neighborhood of 500 KB - 1 MB in size. I don't think this is going to fly with GameKit (Bluetooth), as it will take forever, and it seems like people are having all kinds of issues with GameKit anyway. I'd prefer that the users not have to be on the same WiFi network to make it work, but I am running out of options and time, so I'd probably be happy with whatever works. What is a reasonable alternative? The simpler the setup the better; something with sample code would be greatly appreciated and preferred. Thanks.

    Read the article

  • SSIS - SharePoint list data transfer issue

    - by Vicky
    Hi, we are trying to transfer data (about 60,0000 records only) from an Oracle database to a SharePoint list using SSIS, but we are getting the following error when the record count reaches around 19,000:
        The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020 and System.ServiceModel.ProtocolException: The remote server returned an unexpected response: (400) Bad Request.
    Earlier we thought it could be because of a SharePoint list limit, so we tried reducing two of the columns, and then it went fine. So we are left with one column, of datatype DT_STR and length 400 in Oracle, because of which the issue might be happening. It is mapped to a SharePoint custom list field of multiline type. We also checked whether the length of the field is the issue, but in the Oracle DB the maximum length of this column across all records is only 239, so the length issue is also ruled out. Anyone who has faced this kind of issue or knows its cause, kindly let us know. Thanks and regards, Vicky

    Read the article

  • Can't transfer list<T> to web service?

    - by iTayb
    I have the same classes on my server and on my web service. I have the following WebMethod:
        [WebMethod]
        public int CreateOrder(List<Purchase> p, string username)
        {
            o.Add(new Order(p, username));
            return o.Count;
        }
    However the following code, run at the server:
        protected void CartRepeater_ItemCommand(object source, RepeaterCommandEventArgs e)
        {
            List<Purchase> l = ((List<Purchase>)Session["Cart"]);
            if (e.CommandName == "Order")
            {
                localhost.ValidateService WS = new localhost.ValidateService();
                WS.CreateOrder(l, Session["username"].ToString());
            }
        }
    gives the following error:
        Argument '1': cannot convert from 'System.Collections.Generic.List<Purchase>' to 'localhost.Purchase[]'.
    How can I transfer the List<Purchase> object to the web service? Thank you very much.
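    Not part of the original post, but a rough sketch of the usual shape of the fix: the ASMX proxy generator exposes the collection parameter as an array of its own generated type (localhost.Purchase), so the call site has to hand over that array type. The Id property copied below is an assumption rather than a field from the question, and the snippet assumes using System.Linq.

        // Hypothetical sketch, inside CartRepeater_ItemCommand:
        List<Purchase> cart = (List<Purchase>)Session["Cart"];

        // Map each local Purchase onto the proxy's localhost.Purchase type and
        // pass the resulting array, which is what the generated method expects.
        localhost.Purchase[] wire = cart
            .Select(p => new localhost.Purchase { Id = p.Id /* copy the remaining properties */ })
            .ToArray();

        localhost.ValidateService ws = new localhost.ValidateService();
        ws.CreateOrder(wire, Session["username"].ToString());

    If the local Purchase and the proxy's Purchase are meant to be the same class, sharing the type between the projects removes the need for this mapping, but that is beyond what the post shows.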

    Read the article

  • Tomcat gzip while chunked issue

    - by hoodoos
    I'm experiencing a problem with one of my data source services. As it says in the HTTP response headers, it's running on Apache-Coyote/1.1. The server gives responses with Transfer-Encoding: chunked; here is a sample response:
        HTTP/1.1 200 OK
        Server: Apache-Coyote/1.1
        Content-Type: text/xml;charset=utf-8
        Transfer-Encoding: chunked
        Content-Encoding: gzip
        Date: Tue, 30 Mar 2010 06:13:52 GMT
    The problem is that when I ask the server to send a gzipped response, it often sends an incomplete one. I receive the response and see that the last chunk was received, but after ungzipping I see that the response is partial. I have never seen this behavior with gzip turned off in the request headers. So my question is: is this a common Tomcat issue? Maybe one of its modules that does the compression? Or maybe it is some kind of proxy issue? I can't tell you the version of Tomcat or which gzip module they use, but feel free to ask; I'll try asking my service provider. Thanks.

    Read the article

  • transfer files with ftp with ftpwebrequest

    - by Dejan.S
    Hi. Currently I'm working on this FTP transfer: http://msdn.microsoft.com/en-us/library/ms229715.aspx. I have set up IIS on my server computer to host an FTP site on port 21, and transferring files works great. But I want to add FTP to a hosted site I have on the same server, and this is where I run into connection problems: when I try to connect through the command prompt I get an unknown host error. I have changed the port and opened it up in the firewall. And even if I could connect, how can I decide which folder I want to upload to?
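    As a rough sketch (not taken from the MSDN article) of how the upload target is chosen with FtpWebRequest: the destination folder is simply part of the request URI. The host name, credentials, and paths below are placeholders; the unknown-host error from the command prompt is a separate DNS/binding question that this does not address.

        using System;
        using System.IO;
        using System.Net;

        class FtpUploadSketch
        {
            static void Main()
            {
                // The directory you want the file to land in is encoded in the URI itself.
                var request = (FtpWebRequest)WebRequest.Create(
                    "ftp://ftp.example.com/hostedsite/uploads/report.txt");
                request.Method = WebRequestMethods.Ftp.UploadFile;
                request.Credentials = new NetworkCredential("user", "password");
                request.UseBinary = true;

                byte[] payload = File.ReadAllBytes(@"C:\temp\report.txt");
                using (Stream stream = request.GetRequestStream())
                {
                    stream.Write(payload, 0, payload.Length);
                }

                using (var response = (FtpWebResponse)request.GetResponse())
                {
                    Console.WriteLine("Upload status: {0}", response.StatusDescription);
                }
            }
        }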

    Read the article

  • Apache deflate with chunked encoding

    - by hoodoos
    I'm experiencing a problem with one of my data source services. As it says in the HTTP response headers, it's running on Apache-Coyote/1.1. The server gives responses with Transfer-Encoding: chunked; here is a sample response:
        HTTP/1.1 200 OK
        Server: Apache-Coyote/1.1
        Content-Type: text/xml;charset=utf-8
        Transfer-Encoding: chunked
        Date: Tue, 30 Mar 2010 06:13:52 GMT
    The problem is that when I ask the server to send a gzipped response, it often sends an incomplete one. I receive the response and see that the last chunk was received, but after ungzipping I see that the response is partial. So my question is: is this a common Apache issue? Maybe its mod_deflate module or something? Ask questions if you need more info. Thanks.

    Read the article

  • how to make data downloaded from google-app-engine readable

    - by zjm1126
    I use this to download all the data from my Google App Engine app. I followed this article: http://code.google.com/intl/en/appengine/docs/python/tools/uploadingdata.html#Creating_Exporter_Classes and downloaded the data using this:
        bulkloader.py --dump --url=http://zjm1126.appspot.com/remote_api --filename=b.csv
    But the downloaded data is not readable, so how can I make it readable? Thanks

    Read the article

  • Specifying column names from a list in the data.frame command.

    - by MW Frost
    I have a list called cols with column names in it:
        cols <- c('Column1','Column2','Column3')
    I'd like to reproduce this command, but with a call to the list:
        data.frame(Column1=rnorm(10))
    Here's what happens when I try it:
        > data.frame(cols[1]=rnorm(10))
        Error: unexpected '=' in "data.frame(I(cols[1])="
    The same thing happens if I wrap cols[1] in I() or eval(). How can I feed that item from the vector into the data.frame() command?

    Read the article

  • DatabaseName.bak File Transfer Problem

    - by Jordon
    I have downloaded a databasename.bak file from my hosting company. When I try to restore that DB file in SQL Server 2008, it keeps giving me the following error:
        The media family on device 'C:\go4sharepoint_1384_8481.bak' is incorrectly formed. SQL Server cannot process this media family. RESTORE HEADERONLY is terminating abnormally. (Microsoft SQL Server, Error: 3241)
    This error matches the description at the following link: http://www.sqlcoffee.com/Troubleshooting047.htm. Later, when I tried to restore the file directly on the server, it restored correctly; but when I transferred the same file using the FTP software FileZilla and tried to restore the downloaded copy, it gave the above error. That is, the file is getting corrupted during the FTP transfer. Any idea how I can stop the file from getting corrupted? Note: the file was downloaded using FileZilla with the transfer type set to Binary. Thank you.

    Read the article

  • Programmatically determining maximum transfer rate

    - by dauphic
    I have a problem that requires me to calculate the maximum upload and download rates available, then limit my program's usage to a percentage of them. However, I can't think of a good way to find the maximums. At the moment, the only solution I can come up with is transferring a few megabytes between the client and server, then measuring how long the transfer took. This solution is very undesirable, however, because with 100,000 clients it could potentially result in too much of an increase to our server's bandwidth usage (which is already too high). Does anyone have any solutions to this problem?
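    Not from the original post, but a minimal sketch of the rate-limiting half of the problem (C# is used here only for illustration): once a maximum rate has been estimated by whatever means, copying through a pacing loop keeps usage at a chosen fraction of it. The buffer size and names are placeholders.

        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Threading;

        static class ThrottleSketch
        {
            // Copy 'source' to 'destination' at no more than 'bytesPerSecond'.
            public static void CopyThrottled(Stream source, Stream destination, long bytesPerSecond)
            {
                var buffer = new byte[8192];
                var timer = Stopwatch.StartNew();
                long written = 0;
                int read;
                while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                {
                    destination.Write(buffer, 0, read);
                    written += read;

                    // If we are ahead of the allowed rate, sleep until we are back on pace.
                    double expectedSeconds = (double)written / bytesPerSecond;
                    double aheadBySeconds = expectedSeconds - timer.Elapsed.TotalSeconds;
                    if (aheadBySeconds > 0)
                        Thread.Sleep(TimeSpan.FromSeconds(aheadBySeconds));
                }
            }
        }

    With a measured maximum of, say, 1,000,000 bytes per second, capping at 30% is just CopyThrottled(src, dst, 300000); the harder question the post raises, estimating the maximum without a costly sample transfer, is left open here.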

    Read the article

  • java best way to transfer images

    - by d.raev
    I have an application that reads a PDF, transforms the content into a collection of TIFF files, and sends them to a GlassFish server for saving. Usually there are 1-5 pages and it works nicely, but when I got an input file with 100+ pages it threw an error on the transfer:
        Java heap space
        at java.util.Arrays.copyOf(Arrays.java:2786)
        at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94)
    Adding more resources is not a good option in my case, so I'm looking for a way to optimize it somehow. I store the data in:
        HashMap<TifProfile, List<byte[]>>
    Is there a better way to store or send them? EDIT: I did some tests, and the final collection for a PDF with 80 pages has a size of over 280 MB (240 TIFFs with different settings inside).

    Read the article

  • Asp.Net : IHttpModule + m_context.Server.Transfer = session state error

    - by tinky05
    I have an IHttpModule that implements IRequiresSessionState. Session state is turned on in the page directive and I also added it to the web.config. In the OnBeginRequest method of the IHttpModule I do a Server.Transfer, but I get the error:
        Session state can only be used when enableSessionState is set to true, either in a configuration file or in the Page directive.
    When I access the page directly or with a Response.Redirect, there is no error. Any idea? Update: the error occurs in a page extending System.Web.UI.Page, in its InitializeCulture() method, which is an override.
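    Purely as an illustrative sketch (not the poster's code): for a module, Session is only populated after the AcquireRequestState step, so one common shape is to hook PostAcquireRequestState rather than BeginRequest for any work that needs session state. The target page and condition below are placeholders, and whether this also resolves the InitializeCulture() case from the update depends on what that override touches.

        using System;
        using System.Web;

        public class TransferModule : IHttpModule
        {
            public void Init(HttpApplication context)
            {
                // Session is not available yet at BeginRequest; it is here.
                context.PostAcquireRequestState += OnPostAcquireRequestState;
            }

            private void OnPostAcquireRequestState(object sender, EventArgs e)
            {
                var app = (HttpApplication)sender;
                if (app.Context.Session != null && ShouldTransfer(app.Context))
                {
                    app.Context.Server.Transfer("~/SomePage.aspx");
                }
            }

            private static bool ShouldTransfer(HttpContext context)
            {
                return false; // placeholder for the real routing condition
            }

            public void Dispose() { }
        }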

    Read the article

  • transfer data from Excel to SQL Server

    - by Jason
    I have an Excel spreadsheet that contains all the data I need to put into a SQL Server database. I am fairly new to ASP.NET and have never had to export from Excel to SQL Server before. My Excel spreadsheet looks like this:
        Trade Heading - ArtID - BusinessName - AdStyleCode - Address - Suburb
    In SQL Server I have created a table named "Listings" which is in this format:
        intListingID - intCategoryID - BusinessName - ArtID - intAdCode - Address - Suburb
    What would be the best way to export the data from Excel and then import it into SQL Server 2005? Thanks...
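    Not an official recipe, just one common sketch of the Excel-to-SQL Server step: read the worksheet with the OLE DB provider and push the rows with SqlBulkCopy. The connection strings, the sheet name ([Sheet1$]), and the assumption that intListingID is an identity column filled by the database are placeholders loosely based on the post.

        using System.Data;
        using System.Data.OleDb;
        using System.Data.SqlClient;

        class ExcelToSqlSketch
        {
            static void Main()
            {
                const string excelConn =
                    @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\data\listings.xlsx;" +
                    "Extended Properties=\"Excel 12.0 Xml;HDR=YES\"";
                const string sqlConn = "Server=.;Database=MyDb;Integrated Security=true";

                using (var source = new OleDbConnection(excelConn))
                using (var cmd = new OleDbCommand("SELECT * FROM [Sheet1$]", source))
                {
                    source.Open();
                    using (IDataReader reader = cmd.ExecuteReader())
                    using (var bulk = new SqlBulkCopy(sqlConn))
                    {
                        bulk.DestinationTableName = "Listings";
                        // Map spreadsheet headers to table columns; columns not mapped here
                        // (such as the identity key) are left to the database.
                        bulk.ColumnMappings.Add("BusinessName", "BusinessName");
                        bulk.ColumnMappings.Add("ArtID", "ArtID");
                        bulk.ColumnMappings.Add("Address", "Address");
                        bulk.ColumnMappings.Add("Suburb", "Suburb");
                        bulk.WriteToServer(reader);
                    }
                }
            }
        }

    The Trade Heading to intCategoryID and AdStyleCode to intAdCode mappings are lookups rather than straight copies, so they would need a translation step before or after the bulk load.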

    Read the article

  • How to transfer control from javascript to an asp page

    - by junaid ahmed
    I am very new to .NET. I searched and googled a lot but couldn't find a solution for my issue. I have an ASP.NET application; on my login page I only have a user id field. When the user clicks the submit button, I need a transitional page (a page which the user doesn't see, but which runs in the background) that will run some JavaScript and transfer control to either page A or page B based on some condition. How do I achieve this? How do I call a controller action from JavaScript in an MVC architecture in .NET?

    Read the article

  • Server.Transfer - What could be the issue here?

    - by Younes
    We have implemented a website that lets the user post his action code. This is then checked by the code, and when the user has won a prize the website will Server.Transfer him to another page. The strange thing here is that the user information is submitted to the database and the action code can't be used again. Here we go... I have one user in this database who was inserted twice, very quickly after the first time he was added. These are the timestamps:
        2010-04-23 07:54:41.133
        2010-04-23 07:54:41.417
    The insert statement is only called once from the code, and the user gets Server.Transferred to the Price.aspx page where he sees what prize he won. How can it be that this happened? I'm guessing the user hit F5, but then he had to be very, very fast... Thx!

    Read the article

  • How do I make form data not disappear after hitting refresh?

    - by acidzombie24
    I went to test my page in another browser. In Google Chrome I can fill out a form, hit back and forward, and still have the data there. Now I need to refresh the page so certain data is correct (such as the session id if the cookie expires or the user logs out before submitting). I refresh and lose all the data. Is there some option I can set so all the data is kept?

    Read the article

  • FairWarning Privacy Monitoring Solutions Rely on MySQL to Secure Patient Data

    - by Rebecca Hansen
    FairWarning® solutions have audited well over 120 billion events, each of which was processed and stored in a MySQL database. FairWarning is the world's leading supplier of privacy monitoring solutions for electronic health records, relied on by over 1,200 hospitals and 5,000 clinics to keep their patients' data safe. In January 2014, FairWarning was awarded the highest commendation in healthcare IT as the first ever Category Leader for Patient Privacy Monitoring in the "2013 Best in KLAS: Software & Services" report[1]. FairWarning has used MySQL as their solutions' database from their start in 2005 through worldwide expansion and market leadership. FairWarning recently migrated their solutions from MyISAM to InnoDB and updated from MySQL 5.5 to 5.6. Following are some of the benefits they've had as a result of those changes, and reasons for their continued reliance on MySQL (from the FairWarning MySQL Case Study).
    Scalability to Handle Terabytes of Data
    FairWarning's customers have a lot of data: on average, FairWarning customers receive over 700,000 events to be processed daily. Over 25% of their customers receive over 30 million events per day, which equates to over 1 billion events and nearly one terabyte (TB) of new data each month. Databases range in size from a few hundred GBs to 10+ TBs for enterprise deployments (data are rolled off after 13 months).
    Low or Zero Admin = Few DBAs
    "MySQL has not required a lot of administration. After it's been tuned, configured, and optimized for size on initial setup, we have very low administrative costs. I can scale and add more customers without adding DBAs. This has had a big, positive impact on our business." - Chris Arnold, FairWarning Vice President of Product Management and Engineering.
    Performance Schema
    As the size of FairWarning's customers has increased, so have their tables and data volumes. MySQL 5.6's new maintenance and management features have helped FairWarning keep up. In particular, the MySQL 5.6 performance schema's low-level metrics have provided critical insight into how the system is performing and why.
    Support for Multi-CPU Threads
    MySQL 5.6's support for multiple concurrent CPU threads, and FairWarning's custom data loader, allow multiple files to load into a single table simultaneously vs. one at a time. As a result, their data load time has been reduced by 500%.
    MySQL Enterprise Hot Backup
    Because hospitals and clinics never stop, FairWarning solutions can't either. FairWarning changed from using mysqldump to MySQL Enterprise Hot Backup, which has reduced downtime, restore time, and storage requirements. For many of their larger customers, restore time has decreased by 80%.
    MySQL Enterprise Edition and Product Roadmap Provide Complete Solution
    "MySQL's product roadmap fully addresses our needs. We like the fact that MySQL Enterprise Edition has everything included; there's no need to purchase separate modules." - Chris Arnold
    Learn More>>
    FairWarning MySQL Case Study
    Why MySQL 5.6 is an Even Better Embedded Database for Your Products presentation
    Updating Your Products to MySQL 5.6, Best Practices for OEMs on-demand webinar (audio and / or slides + Q&A transcript)
    MyISAM to InnoDB – Why and How on-demand webinar (same stuff)
    Top 10 Reasons to Use MySQL as an Embedded Database white paper
    [1] 2013 Best in KLAS: Software & Services report, January, 2014. © 2014 KLAS Enterprises, LLC. All rights reserved.

    Read the article

  • Indexing data from multiple tables with Oracle Text

    - by Roger Ford
    It's well known that Oracle Text indexes perform best when all the data to be indexed is combined into a single index. The query
        select * from mytable where contains (title, 'dog') > 0 or contains (body, 'cat') > 0
    will tend to perform much worse than
        select * from mytable where contains (text, 'dog WITHIN title OR cat WITHIN body') > 0
    For this reason, Oracle Text provides the MULTI_COLUMN_DATASTORE, which will combine data from multiple columns into a single index. Effectively, it constructs a "virtual document" at indexing time, which might look something like:
        <title>the big dog</title>
        <body>the ginger cat smiles</body>
    This virtual document can be indexed using either AUTO_SECTION_GROUP, or by explicitly defining sections for title and body, allowing the query as expressed above. Note that we've used a column called "text" - this might have been a dummy column added to the table simply to allow us to create an index on it - or we could have created the index on either of the "real" columns - title or body. It should be noted that MULTI_COLUMN_DATASTORE doesn't automatically handle updates to columns used by it - if you create the index on the column text, but specify that columns title and body are to be indexed, you will need to arrange triggers such that the text column is updated whenever title or body are altered.
    That works fine for single tables. But what if we actually want to combine data from multiple tables? In that case there are two approaches which work well:
    1. Create a real table which contains a summary of the information, and create the index on that using the MULTI_COLUMN_DATASTORE. This is simple and effective, but it does use a lot of disk space as the information to be indexed has to be duplicated.
    2. Create our own "virtual" documents using the USER_DATASTORE. The user datastore allows us to specify a PL/SQL procedure which will be used to fetch the data to be indexed, returned in a CLOB, or occasionally in a BLOB or VARCHAR2. This PL/SQL procedure is called once for each row in the table to be indexed, and is passed the ROWID value of the current row being indexed. The actual contents of the procedure are entirely up to the owner, but it is normal to fetch data from one or more columns from database tables.
    In both cases, we still need to take care of updates - making sure that we have all the triggers necessary to update the indexed column (and, in case 1, the summary table) whenever any of the data to be indexed gets changed.
    I've written full examples of both these techniques, as SQL scripts to be run in the SQL*Plus tool. You will need to run them as a user who has the CTXAPP role and the CREATE DIRECTORY privilege. Part of the data to be indexed is a Microsoft Word file called "1.doc". You should create this file in Word, preferably containing the single line of text: "test document". This file can be saved anywhere, but the SQL scripts need to be changed so that the "create or replace directory" command refers to the right location. In the example, I've used C:\doc.
    multi_table_indexing_1.sql: creates a summary table containing all the data, and uses multi_column_datastore. Download link / View in browser
    multi_table_indexing_2.sql: creates "virtual" documents using a procedure as a user_datastore. Download link / View in browser

    Read the article

  • Oracle MDM Maturity Model

    - by David Butler
    A few weeks ago, I discussed the results of a survey conducted by Oracle's Insight team. The survey was based on the data management maturity model that the Oracle Insight team has developed over the years as they analyzed customer IT organizations to help them get more out of everything they already have. I thought you might like to learn more about the maturity model itself. It can help you figure out where you stand when it comes to getting your organization's data management act together. The model covers maturity levels around five key areas: profiling data sources; defining a data strategy; defining a data consolidation plan; data maintenance; and data utilization.
    Profile data sources: Profiling data sources involves taking an inventory of all data sources from across your IT landscape, then evaluating the quality of the data in each source system. This enables the scoping of what data to collect into an MDM hub and what rules are needed to ensure data harmonization across systems.
    Define data strategy: A data strategy requires an understanding of the data usage. Given data usage, various data governance requirements need to be developed. This includes data controls and security rules as well as data structure and usage policies.
    Define data consolidation strategy: Consolidation requires defining your operational data model and how integration is to be accomplished. Cross-referencing common data attributes from multiple systems is needed. Synchronization policies also need to be developed.
    Data maintenance: The desired standardization needs to be defined, including what constitutes a 'match' once the data has been standardized. Cleansing rules are a part of this methodology. Data quality monitoring requirements also need to be defined.
    Utilize the data: What data gets published, and who consumes the data, must be determined. How to get the right data to the right place in the right format, given its intended use, must be understood. Validating the data and ensuring security rules are in place and enforced are crucial aspects for full no-risk data utilization.
    For each of the above data management areas, a maturity level needs to be assessed. Where your organization wants to be should also be identified using the same maturity levels. This results in a sound gap analysis your organization can use to create action plans to achieve the ultimate goals.
    Marginal is the lowest level. It is characterized by manually maintained trusted sources; lacking or inconsistent, silo'd structures with limited integration; and gaps in automation.
    Stable is the next leg up the MDM maturity staircase. It is characterized by tactical MDM implementations that are limited in scope and target a specific division. It includes limited data stewardship capabilities as well.
    Best Practice is a serious MDM maturity level characterized by process automation improvements. The scope is enterprise wide. It is a business solution that provides a single version of the truth, with closed-loop data quality capabilities. It is typically driven by an enterprise architecture group with both business and IT representation.
    Transformational is the highest MDM maturity level. At this level, MDM is quantitatively managed. It is integrated with Business Intelligence, SOA, and BPM. MDM is leveraged in business process orchestration.
Take an inventory using this MDM Maturity Model and see where you are in your journey to full MDM maturity with all the business benefits that accrue to organizations who have mastered their data for the benefit of all operational applications, business processes, and analytical systems. To learn more, Trevor Naidoo and I have written the Oracle MDM Maturity Model whitepaper. It’s free, so go ahead and download it and use it as you see fit.

    Read the article

  • Second Day of Data Integration Track at OpenWorld 2012

    - by Doug Reid
    Our second day at OpenWorld, and the Data Integration team was very active with customer meetings, product updates, product demonstrations, sessions, plus much more. If the volume of traffic by our demo pods is any indicator, this is a record year for attendance at OpenWorld. The DIS team have had a tremendous number of people stop by our demo pods to learn about the latest product releases or to speak to one of our product managers. For Oracle GoldenGate, there has been a great deal of interest in Integrated Capture and the Oracle GoldenGate Monitor plug-in for Enterprise Manager. Our customer panels this year have been very well attended, and on Tuesday we held the “Real World Operational Reporting with Oracle GoldenGate Customer Panel”. On this panel this year we had Michael Wells from Raymond James, Joy Mathew and Venki Govindarajan from Comcast, and Serkan Karatas from Turk Telekom. Our panelists have a great mix of experiences and all are passionate about using Oracle Data Integration products to solve very complex use cases. Each panelist was given ten minutes to overview their use of our product, followed by a barrage of questions from the audience. Michael Wells spoke about using Oracle GoldenGate for heterogeneous real-time replication from HP (Tandem) NonStop to SQL Server and emphasized the need for using standard naming conventions when customers configure GoldenGate, as the practice is immensely helpful when debugging a problem. Joy Mathew and Venkat Govindarajan from Comcast described how they have used GoldenGate for over a decade and their experiences of using the product for replicating data from HP NonStop to Teradata. Serkan Karatas from Turk Telekom dove into using Oracle GoldenGate and the value of archiving data in extremely large databases, which in Turk Telekom's case resulted in a 1 month ROI for the entire project. Thanks again to our panelists and audience participants for making the session interactive and informative. For Wednesday we have a number of sessions available to attendees plus two hands-on labs, which I have listed below. If you are unable to attend our hands-on lab for Oracle GoldenGate Veridata, it is available online at youtube.com.
    Sessions
        11:45 AM - 12:45 PM
            Best Practices for High Availability with Oracle GoldenGate on Oracle Exadata (Moscone South - 102)
        1:15 PM - 2:15 PM
            Customer Perspectives: Oracle Data Integrator (Marriott Marquis - Golden Gate C3)
            Oracle GoldenGate Case Study: Real-Time Operational Reporting Deployment at Oracle (Moscone West - 2003)
            Data Preparation and Ongoing Governance with the Oracle Enterprise Data Quality Platform (Moscone West - 3000)
        3:30 PM - 4:30 PM
            Best Practices for Conflict Detection and Resolution in Oracle GoldenGate for Active/Active (Moscone West - 3000)
        5:00 PM - 6:00 PM
            Tuning and Troubleshooting Oracle GoldenGate on Oracle Database (Moscone South - 102)
    Hands-on Labs
        10:15 AM - 11:15 AM
            Introduction to Oracle GoldenGate Veridata (Marriott Marquis - Salon 1/2)
        11:45 AM - 12:45 PM
            Oracle Data Integrator and Oracle SOA Suite: Hands-on Lab (Marriott Marquis - Salon 1/2)
    If you are at OpenWorld please join us in these sessions. For a full review of the data integration track at OpenWorld please see our Focus-On Document.

    Read the article

  • How can I recover data from a dmg that cannot mount?

    - by Benjamin Lee
    I backed up a hard drive in a dmg, then reformatted the hard drive. (I had deleted the EFI partition beforehand, which was preventing me from reinstalling the operating system.) When I tried to use the restore function in Disk Utility it gave an input/output error. I get this error with anything I do to the image, including mounting, converting, attaching, verifying, scanning, and getting info through hdiutil imageinfo. I have run all of these with hdiutil and the -noverify, -nomount, and -ignorebadchecksums flags. When I copy the image onto another disk/partition I get a different error: something like "No filesystem". I cannot repair the image with Disk Utility or asr, which both throw the I/O error. When I put the -verbose flag on the command I actually get a different error: "hdiutil: attach failed - No child processes". I have output from both the -verbose and -debug flags, but it is fairly long so I had to attach it as links (-debug, -verbose) to avoid the 3000 character limit. No recovery tool can get at the data because the image is both compressed and unmountable. How can I get the data back, and what has gone wrong?

    Read the article
