Search Results

Search found 60391 results on 2416 pages for 'data generation'.


  • Graph Theory: How to compute closeness centrality for each node in a set of data?

    - by Jordan
    I'd like to learn how to apply network theory to my own cache of relational data. I'm trying to build a demo of a new way of browsing a music library, using network theory, that I think would make for a very intuitive and useful way of finding the right song at any given time. I have all the data (artists as nodes, similarity from 0 to 1 between each artist and those it is related to) and I can already program, but I don't know how to actually calculate the centrality of a node from that. I've spent a while trying to email different professors at my school but no one seems to know where I can learn this. I hope someone's done something similar. Thanks in advance you guys! ~Jordan
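
    One common approach, sketched below in C#: treat each similarity as an edge weight, convert it to a distance (assumed here as 1 - similarity), run a shortest-path search from the node of interest, and take closeness as (n - 1) divided by the sum of distances to the reachable nodes. The adjacency shape and the distance conversion are assumptions for illustration, not from the original question.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class Closeness
        {
            // graph: artist -> (neighbouring artist -> distance), where distance = 1 - similarity (assumption).
            // Every artist must appear as a key in the dictionary.
            public static double For(string start, Dictionary<string, Dictionary<string, double>> graph)
            {
                var dist = graph.Keys.ToDictionary(n => n, n => double.PositiveInfinity);
                dist[start] = 0.0;
                var visited = new HashSet<string>();

                // plain O(V^2) Dijkstra - fine for a music-library-sized graph
                while (visited.Count < graph.Count)
                {
                    string u = dist.Where(kv => !visited.Contains(kv.Key))
                                   .OrderBy(kv => kv.Value)
                                   .First().Key;
                    if (double.IsPositiveInfinity(dist[u])) break;   // remaining nodes are unreachable
                    visited.Add(u);

                    foreach (var edge in graph[u])
                        if (dist[u] + edge.Value < dist[edge.Key])
                            dist[edge.Key] = dist[u] + edge.Value;
                }

                double sum = dist.Values.Where(d => !double.IsPositiveInfinity(d)).Sum();
                int reachable = dist.Values.Count(d => !double.IsPositiveInfinity(d)) - 1; // exclude the start node
                return reachable > 0 ? reachable / sum : 0.0;  // closeness = (n - 1) / sum of shortest distances
            }
        }

    Calling Closeness.For("SomeArtist", graph) for each artist then gives a per-node closeness score; the names here are hypothetical.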

    Read the article

  • How does TransactionScope guarantee data integrity across multiple databases?

    - by Bas Smit
    Hey guys, can someone tell me the principle of how TransactionScope guarantees data integrity across multiple databases? I imagine it first sends the commands to the databases and then waits for them to respond before sending a message telling them to apply the commands sent earlier. However, if execution stops abruptly while those apply messages are being sent, we could still end up with one database that has applied the command and one that has not. Can anyone shed some light on this? Edit: I guess what I'm asking is: can I rely on TransactionScope to guarantee data integrity when writing to multiple databases in case of a power outage or a sudden shutdown? Thanks, Bas. Example:

        using (var scope = new TransactionScope())
        {
            using (var context = new FirstEntities())
            {
                context.AddToSomethingSet(new Something());
                context.SaveChanges();
            }
            using (var context = new SecondEntities())
            {
                context.AddToSomethingElseSet(new SomethingElse());
                context.SaveChanges();
            }
            scope.Complete();
        }

    Read the article

  • Safari/Chrome problem with ajaxsubmit?

    - by Jan
    Hi, I'm currently having some weird issues with ajaxSubmit (http://jquery.malsup.com/form/#ajaxSubmit), which I'm using in a project. I have a flow where I need to open a form in a modal window. I'm using fancybox for that and it works like a charm. Once the form has been opened in the fancybox window, two things can happen. 1) If the user who is about to submit the form is logged in, she should see a confirmation in the modal box that her input was successfully submitted. 2) If the user is not logged in, a login form should be loaded once she hits the submit button. 2.1) When the user has logged in, she should receive a confirmation in the modal box. This all works like a charm in Firefox, IE8 and IE7, but not in Safari or Chrome. The weird part is that Safari and Chrome seem to be completely ignoring my ajaxSubmit form. To force the first form to open I use the following script - this part works in both Safari and Chrome.

        $(".klikEnPrisForm").ajaxForm({
            success: function(data){
                $.fancybox({'content':data});
            }
        });

    My ajaxSubmit script looks like this:

        var options = {
            url: '/?altTemplate=XmlProxyKlikEnPris',
            dataType: 'xml',
            data: $(this).serializeArray(),
            success: function(data) {
                if ($(data).find('loggetind').text() == 'true') {
                    $("#klikenpris").hide();
                    $('<div id="fancybox-inner-klik"></div>').appendTo('#fancybox-inner');
                    $('#fancybox-inner-klik').load('/KlikEnPrisAccept?tilKvittering=1&sagsno=' + $(data).find('sagsnummer').text() + '&pris=' + $(data).find('pris').text() + '&klik-comment=' + $(data).find('kommentar').text() + '&klik-telefon=' + $(data).find('tlf').text() + '&klik-maeglerkontakt=' + $(data).find('maakontakte').text()).stop(true, true);
                } else {
                    $("#klikenpris").hide();
                    $("#fancybox-wrap").css({ 'width': '480px', 'height': '220px' });
                    $("#fancybox-inner").css({ 'width': '460px', 'height': '220px' });
                    $('<div id="fancybox-inner-klik"></div>').appendTo('#fancybox-inner');
                    $('#fancybox-inner-klik').load('/login.aspx?loginklikpris=0&klikpris=1&sagsno=' + $(data).find('sagsnummer').text() + '&pris=' + $(data).find('pris').text() + '&klik-comment=' + $(data).find('kommentar').text() + '&klik-telefon=' + $(data).find('tlf').text() + '&klik-maeglerkontakt=' + $(data).find('maakontakte').text()).stop(true, true);
                }
            }
        };

        // bind to the form's submit event
        $('#klikenprisform').submit(function() {
            // inside event callbacks 'this' is the DOM element so we first
            // wrap it in a jQuery object and then invoke ajaxSubmit
            $(this).ajaxSubmit(options);
            // !!! Important !!!
            // always return false to prevent standard browser submit and page navigation
            return false;
        });

    I have tried inserting an alert in the success callback function, but it never seems to be called. It seems like the default action is not being overruled by the URL given in the ajaxSubmit options. I'm really puzzled about this, since it works nicely in other browsers, and I'm completely lost on how to approach debugging it in Safari/Chrome. I hope all the above makes sense and I'm looking forward to hearing any suggestions. Cheers!

    Read the article

  • Demystified - BI in SharePoint 2010

    - by Sahil Malik
    Frequently, my clients ask me if there is a good guide to deciphering the seemingly daunting choice of products from Microsoft when it comes to business intelligence offerings in a SharePoint 2010 world. These are all described in detail in my book, but here is a one (well, maybe two) page executive overview.

    Microsoft Excel: Yes, Microsoft Excel! No, it isn't a database by any technically pure definition, but it is the most commonly used 'database' in the world. You will find many business users craft up very compelling Excel sheets with tonnes of logic inside them. Good for: quick ad-hoc reports. Excel 64-bit allows the possibility of very large datasheets (also see 32-bit vs 64-bit Office, and the PowerPivot add-in below). Audience: end business users can build such solutions. Related technologies: PowerPivot, Excel Services.

    Microsoft Excel with PowerPivot Add-In: The PowerPivot add-in is an extension to Excel that adds support for large-scale data. Think of this as Excel with the ability to deal with very large amounts of data. It has an in-memory data store as an option for Analysis Services. Good for: ad-hoc reporting and logic with very large amounts of data. Audience: end business users can build such solutions. Related technologies: Excel, and Excel Services.

    Excel Services: Excel Services is a Microsoft SharePoint Server 2010 shared service that brings the power of Excel to SharePoint Server by providing server-side calculation and browser-based rendering of Excel workbooks. Thus, Excel sheets can be created by end users and published to SharePoint Server, which then renders them right through the browser in read-only or parameterized-read-only modes. They can also be accessed by other software via SOAP or REST based APIs. Good for: sharing Excel sheets with a larger number of people while maintaining control/version control, etc.; sharing logic embedded in Excel sheets with other software across the organization via REST/SOAP interfaces. Audience: end business users can build such solutions once your tech staff has set up Excel Services on a SharePoint server instance. Programmers can write software consuming functionality/complex formulae contained in your sheets. Related technologies: PerformancePoint Services, Excel, and PowerPivot.

    Visio Services: Visio Services is a shared service on the Microsoft SharePoint Server 2010 platform that allows users to share and view Visio diagrams that may or may not have data connected to them. Connected data can update these diagrams, allowing a visual/graphical view into the data. The diagrams are viewable through the browser. They are rendered in Silverlight, but will automatically down-convert to .png format. Good for: showing data as diagrams, with live updating. Comes with a developer story. Audience: end business users can build such solutions once your tech staff has set up Visio Services on a SharePoint server instance. Developers can enhance the visualizations. Related technologies: Visio Services can be used to render workflow visualizations in SP2010.

    Reporting Services: SQL Server Reporting Services can integrate with SharePoint, allowing you to store reports and data sources in SharePoint document libraries, and render these reports and associated functionality such as subscriptions through a SharePoint site. In SharePoint 2010, you can also write reports against SharePoint lists (Access Services uses this technique). Good for: showing complex reports running in an industry-standard data store, such as SQL Server. Audience: this is definitely developer land. Don't expect end users to craft up reports, unless a report model has previously been published. Related technologies: PerformancePoint Services.

    PerformancePoint Services: PerformancePoint Services in SharePoint 2010 is now fully integrated with SharePoint, and comes with features that can either be used in the BI Center site definition, or on their own as activated features in existing site collections. PerformancePoint Services allows you to build reports and dashboards that target a variety of back-end data sources including SQL Server Reporting Services, SQL Server Analysis Services, SharePoint lists, Excel Services, simple tables, etc. Using these you have the ability to create dashboards, scorecards/KPIs, and simple reports. You can also create reports targeting hierarchical multidimensional data sources. The visual decomposition tree is a new report type that lets you quickly break down multi-dimensional data. Good for: mostly everything :), except your wallet – it's not free! But this is the most comprehensive offering. If you have SharePoint Server, forget everything else and go with PerformancePoint. Audience: developers need to set up the back-end sources and the manageability story; DBAs need to set up data warehouses with cubes. Moderately sophisticated business users, or developers, can craft up reports using Dashboard Designer, which is a ClickOnce app that deploys with PerformancePoint. Related technologies: Excel Services, Reporting Services, etc.

    Other relevant technologies to know about: Business Connectivity Services: allows for consumption of external data in SharePoint as columns or external lists. This can be paired with one or more of the above BI offerings, allowing insight into such data. Access Services: allows the representation/publishing of an Access database as a SharePoint 2010 site, leveraging many SharePoint features. Reporting Services is used by Access Services. Secure Store Service: the SP2010 Secure Store Service is a replacement for the SP2007 single sign-on feature. It acts as a credential policeman, providing credentials to various applications running with SharePoint. BCS, PerformancePoint Services, Excel Services, and many other apps use the SSS (Secure Store Service) for credential control.

    Read the article

  • How do I load a CSV file in Rails from a migration using LOAD DATA LOCAL INFILE?

    - by Chris Drappier
    Hi all, I have my CSV file in my public folder, and I'm trying to load it from a migration, but I get a file not found error using this script:

        ActiveRecord::Base.connection.execute(
          "load data local infile '#{RAILS_ROOT}/public/muds_variables.csv' into table muds_variables " +
          "fields terminated by ',' " +
          "lines terminated by '\n' " +
          "(variable_name, definition)")

    I've checked and re-checked the file path, and that's definitely where it lives. I've also tried it using just the file name without any of the path, and a few other combos, but I can't make it work :(. Can anyone help me out with this? Here's the error: Mysql::Error: File '/home/chris/rails_projects/muds/public/muds_variables.csv' not found (Errcode: 2): load data local infile '/home/chris/rails_projects/muds/public/muds_variables.csv' into table muds_variables fields terminated by ',' lines terminated by ' ' (variable_name, definition) -C

    Read the article

  • In WPF, how do I define a DataTemplate for an enum?

    - by Ashish Ashu
    I have an enum defined as Type:

        public enum Type { OneType, TwoType, ThreeType };

    Now I bind Type to a drop-down menu in a Ribbon control that displays each menu item with a name and a corresponding image (I am using the Syncfusion Ribbon control). I want each enum value (like OneType) to have a data template defined that holds the name of the menu item and the corresponding image. How can I define a data template for an enum? Please suggest a solution if this is possible, and tell me if it's not possible or I am thinking in the wrong direction!
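
    One common WPF pattern for "a template per enum value" is a DataTemplateSelector that looks up a keyed template for each value. A minimal sketch, assuming templates keyed "OneTypeTemplate", "TwoTypeTemplate", etc. are defined as resources - the key convention is made up here, and the Syncfusion-specific wiring is not shown:

        using System.Windows;
        using System.Windows.Controls;

        public class EnumTemplateSelector : DataTemplateSelector
        {
            public override DataTemplate SelectTemplate(object item, DependencyObject container)
            {
                // item is the bound enum value (OneType, TwoType, ...); look up a resource
                // keyed "<value>Template" from the element's resource chain.
                var element = container as FrameworkElement;
                if (item != null && element != null)
                    return element.TryFindResource(item.ToString() + "Template") as DataTemplate;
                return base.SelectTemplate(item, container);
            }
        }

    Each keyed DataTemplate would then contain a TextBlock for the menu name and an Image for the icon, and the selector would be assigned to the drop-down's ItemTemplateSelector property.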

    Read the article

  • Getting started with Oracle Database In-Memory Part III - Querying The IM Column Store

    - by Maria Colgan
    In my previous blog posts, I described how to install, enable, and populate the In-Memory column store (IM column store). This week's post focuses on how data is accessed within the IM column store. Let's take a simple query: "What is the most expensive air-mail order we have received to date?"

        SELECT Max(lo_ordtotalprice) most_expensive_order
        FROM   lineorder
        WHERE  lo_shipmode = 5;

    The LINEORDER table has been populated into the IM column store, and since we have no alternative access paths (indexes or views) the execution plan for this query is a full table scan of the LINEORDER table. You will notice that the execution plan has a new set of keywords, "IN MEMORY", in the access method description in the Operation column. These keywords indicate that the LINEORDER table has been marked INMEMORY and we may use the IM column store in this query. What do I mean by "may use"? There are a small number of cases where we won't use the IM column store even though the object has been marked INMEMORY. This is similar to how the keyword STORAGE is used in Exadata environments. You can confirm that the IM column store was actually used by examining the session level statistics, but more on that later. For now let's focus on how the data is accessed in the IM column store and why it's faster to access the data in the new column format, for analytical queries, than in the buffer cache. There are four main reasons why accessing the data in the IM column store is more efficient.

    1. Access only the column data needed. The IM column store only has to scan two columns – lo_shipmode and lo_ordtotalprice – to execute this query, while the traditional row store or buffer cache has to scan all of the columns in each row of the LINEORDER table until it reaches both the lo_shipmode and the lo_ordtotalprice column.

    2. Scan and filter data in its compressed format. When data is populated into the IM column store it is automatically compressed using a new set of compression algorithms that allow WHERE clause predicates to be applied against the compressed formats. This means the volume of data scanned in the IM column store for our query will be far less than for the same query in the buffer cache, where the data is scanned in its uncompressed form, which could be 20X larger.

    3. Prune out any unnecessary data within each column. The fastest read you can execute is the read you don't do. In the IM column store a further reduction in the amount of data accessed is possible thanks to the In-Memory Storage Indexes (IM storage indexes) that are automatically created and maintained on each of the columns in the IM column store. IM storage indexes allow data pruning to occur based on the filter predicates supplied in a SQL statement. An IM storage index keeps track of minimum and maximum values for each column in each In-Memory Compression Unit (IMCU). In our query the WHERE clause predicate is on the lo_shipmode column. The IM storage index on the lo_shipmode column is examined to determine if our specified column value 5 exists in any IMCU, by comparing the value 5 to the minimum and maximum values maintained in the storage index. If the value 5 is outside the minimum and maximum range for an IMCU, the scan of that IMCU is avoided. For the IMCUs where the value 5 does fall within the min/max range, an additional level of data pruning is possible via the metadata dictionary created when dictionary-based compression is used on an IMCU. The dictionary contains a list of the unique column values within the IMCU. Since we have an equality predicate, we can easily determine whether 5 is one of the distinct column values or not. The combination of the IM storage index and dictionary-based pruning enables us to scan only the necessary IMCUs.

    4. Use SIMD to apply filter predicates. For the IMCUs that need to be scanned, Oracle takes advantage of SIMD vector processing (Single Instruction processing Multiple Data values). Instead of evaluating each entry in the column one at a time, SIMD vector processing allows a set of column values to be evaluated together in a single CPU instruction. The column format used in the IM column store has been specifically designed to maximize the number of column entries that can be loaded into the vector registers on the CPU and evaluated in a single CPU instruction. SIMD vector processing enables Oracle Database In-Memory to scan billions of rows per second per core, versus the millions of rows per second per core that can be achieved in the buffer cache.

    I mentioned earlier in this post that in order to confirm the IM column store was used, we need to examine the session level statistics. You can monitor the session level statistics by querying the performance views v$mystat and v$statname. All of the statistics related to the In-Memory Column Store begin with IM. You can see the full list of these statistics by typing:

        column display_name format a30
        SELECT display_name
        FROM   v$statname
        WHERE  display_name LIKE 'IM%';

    If we check the session statistics after we execute our query, the results would be as follows:

        SELECT Max(lo_ordtotalprice) most_expensive_order
        FROM   lineorder
        WHERE  lo_shipmode = 5;

        SELECT display_name
        FROM   v$statname
        WHERE  display_name IN ('IM scan CUs columns accessed',
                                'IM scan segments minmax eligible',
                                'IM scan CUs pruned');

    As you can see, only 2 IMCUs were accessed during the scan, as the majority of the IMCUs (44) in the LINEORDER table were pruned out thanks to the storage index on the lo_shipmode column. In next week's post I will describe how you can control which queries use the IM column store and which don't. +Maria Colgan

    Read the article

  • node.js / socket.io, cookies only working locally

    - by Ben Griffiths
    I'm trying to use cookie based sessions, however it'll only work on the local machine, not over the network. If I remove the session related stuff, it will however work just great over the network... You'll have to forgive the lack of quality code here, I'm just starting out with node/socket etc etc, and finding any clear guides is tough going, so I'm in n00b territory right now. Basically this is so far hacked together from various snippets with about 10% understanding of what I'm actually doing... The error I see in Chrome is: socket.io.js:1632GET http://192.168.0.6:8080/socket.io/1/?t=1334431940273 500 (Internal Server Error) Socket.handshake ------- socket.io.js:1632 Socket.connect ------- socket.io.js:1671 Socket ------- socket.io.js:1530 io.connect ------- socket.io.js:91 (anonymous function) ------- /socket-test/:9 jQuery.extend.ready ------- jquery.js:438 And in the console for the server I see: debug - served static content /socket.io.js debug - authorized warn - handshake error No cookie My server is: var express = require('express') , app = express.createServer() , io = require('socket.io').listen(app) , connect = require('express/node_modules/connect') , parseCookie = connect.utils.parseCookie , RedisStore = require('connect-redis')(express) , sessionStore = new RedisStore(); app.listen(8080, '192.168.0.6'); app.configure(function() { app.use(express.cookieParser()); app.use(express.session( { secret: 'YOURSOOPERSEKRITKEY', store: sessionStore })); }); io.configure(function() { io.set('authorization', function(data, callback) { if(data.headers.cookie) { var cookie = parseCookie(data.headers.cookie); sessionStore.get(cookie['connect.sid'], function(err, session) { if(err || !session) { callback('Error', false); } else { data.session = session; callback(null, true); } }); } else { callback('No cookie', false); } }); }); var users_count = 0; io.sockets.on('connection', function (socket) { console.log('New Connection'); var session = socket.handshake.session; ++users_count; io.sockets.emit('users_count', users_count); socket.on('something', function(data) { io.sockets.emit('doing_something', data['data']); }); socket.on('disconnect', function() { --users_count; io.sockets.emit('users_count', users_count); }); }); My page JS is: jQuery(function($){ var socket = io.connect('http://192.168.0.6', { port: 8080 } ); socket.on('users_count', function(data) { $('#client_count').text(data); }); socket.on('doing_something', function(data) { if(data == '') { window.setTimeout(function() { $('#target').text(data); }, 3000); } else { $('#target').text(data); } }); $('#textbox').keydown(function() { socket.emit('something', { data: 'typing' }); }); $('#textbox').keyup(function() { socket.emit('something', { data: '' }); }); });

    Read the article

  • JDBC connections: How to specify the port for data-transfer?

    - by LeO
    I want to run my JDBC connection (either Oracle or MSSQL) through a proxy server. The reason for this is to have additional control over the traffic, especially during development. I know I could specify the proxy, which runs on my machine, and the port in the connection string. But the specified connection settings are only used as a kind of handshake to agree on which port the data is finally transferred over. And that is definitely not a port I have under proxy control. So, does anybody have an idea how to specify the port for the data transfer? I would prefer if this could be done in the connection string. The same issue applies to both Oracle and MSSQL. Thx LeO

    Read the article

  • Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.

    - by Paul J. Warner
    I am having an issue with a program where after 6 mins +- 5 secs we get the above exception. Some more info about the exception stacktrace is below. This all happens pretty religiously: 6 mins goes by and bam, the following 3 exceptions. We have the application installed in 2 other environments and it is working fine there. I am hoping to find some server settings - either IIS 6 or Server 2003 settings - that may be causing this issue to occur. I have reviewed some of the similar questions and don't see very many answers. I am hoping that maybe the information I have provided may help a little bit.

        208741,Exception,,,,2011-06-21 00:30:14.193,SERVERNAME,2624,1,CLIENTNAME,The underlying connection was closed: An unexpected error occurred on a receive. , at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request) at System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request) at Microsoft.Web.Services3.WebServicesClientProtocol.GetResponse(WebRequest request, IAsyncResult result) at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.FixedSizeReader.ReadPacket(Byte[] buffer, Int32 offset, Int32 count) at System.Net.Security._SslStream.StartFrameHeader(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.Security._SslStream.StartReading(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.Security._SslStream.ProcessRead(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.TlsStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.Connection.SyncRead(HttpWebRequest request, Boolean userRetrievedStream, Boolean probeRead),2004437127,114,1

        208742,Exception,,,,2011-06-21 00:30:14.227,SERVERNAME,2624,1,CLIENTNAME,Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. , at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.FixedSizeReader.ReadPacket(Byte[] buffer, Int32 offset, Int32 count) at System.Net.Security._SslStream.StartFrameHeader(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.Security._SslStream.StartReading(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.Security._SslStream.ProcessRead(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.TlsStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.Connection.SyncRead(HttpWebRequest request, Boolean userRetrievedStream, Boolean probeRead),2004437127,114,1

        208743,Exception,,,,2011-06-21 00:30:14.287,SERVERNAME,2624,1,CLIENTNAME,An existing connection was forcibly closed by the remote host , at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size),-691097507,62,1

    Read the article

  • How to send EML data in chunks to Google Apps Mail using Google API ver 2?

    - by Preeti
    Hi, I am migrating EML mails to Google Apps. When I try to migrate an EML file with two attachments (2.1 MB and 1.96 MB), it throws the exception: "The request was aborted: The request was canceled." I am using the code below:

        MailItemEntry[] entries = new MailItemEntry[1];
        String msg = File.ReadAllText(EmlPath);
        entries[0] = new MailItemEntry();
        entries[0].Rfc822Msg = new Rfc822MsgElement(msg);
        ........
        MailItemFeed feed = mailItemService.Batch(domain, UserName, entries);

    I think sending the data in chunks could resolve this issue. So, how can I send this EML data in chunks to Google Apps? Thanx

    Read the article

  • BrowserField or custom controls? Which is best to use when submitting and fetching data from the web?

    - by SIA
    Hi everybody, I am unable to decide what to use for my BlackBerry application. I am developing an application for a BlackBerry device. This application sends data to and receives data from a website; that's the only functionality. I want to know what the best approach is. Should I use a BrowserField and display HTML in the application, or should I develop custom controls and update the UI with the data fetched from the web? Please suggest/advise. Thanks, SIA

    Read the article

  • How to get a clipboard paste notification and provide my own data?

    - by Uwe Keim
    For a small utility I am writing (.NET, C#), I want to monitor clipboard copy operations and clipboard paste operations. My idea is to provide my own data when pasting into an arbitrary application. Monitoring a copy operation can easily be done by using a clipboard viewer. Something that seems much more advanced to me is to write a "clipboard paste provider": answer the "what formats are available" queries of applications, and supply data to application paste operations. I found this posting and this posting, but neither of them seems to really help me. What I guess is that I somehow have to mimic/hijack the current clipboard. Question: Is it possible to "wrap" the clipboard in terms of paste operations and provide my own kind of "clipboard proxy"? Thanks Uwe
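
    One standard Win32 mechanism along these lines is delayed clipboard rendering: the owner advertises a format with a NULL data handle, and Windows sends the owning window WM_RENDERFORMAT the moment another application actually pastes, at which point the data is supplied. A minimal C# sketch, assuming a WinForms window and CF_UNICODETEXT; the class name and payload are illustrative, not from the original post:

        using System;
        using System.Runtime.InteropServices;
        using System.Text;
        using System.Windows.Forms;

        // Clipboard owner using delayed rendering: the format is advertised with a
        // NULL handle, and the data is only produced when a paste requests it.
        public class PasteProviderForm : Form
        {
            const uint CF_UNICODETEXT = 13;
            const int WM_RENDERFORMAT = 0x0305;   // sent when a paste actually requests our format
            const uint GMEM_MOVEABLE = 0x0002;

            [DllImport("user32.dll")] static extern bool OpenClipboard(IntPtr hWndNewOwner);
            [DllImport("user32.dll")] static extern bool EmptyClipboard();
            [DllImport("user32.dll")] static extern bool CloseClipboard();
            [DllImport("user32.dll")] static extern IntPtr SetClipboardData(uint uFormat, IntPtr hMem);
            [DllImport("kernel32.dll")] static extern IntPtr GlobalAlloc(uint uFlags, UIntPtr dwBytes);
            [DllImport("kernel32.dll")] static extern IntPtr GlobalLock(IntPtr hMem);
            [DllImport("kernel32.dll")] static extern bool GlobalUnlock(IntPtr hMem);

            // Advertise CF_UNICODETEXT without supplying any data yet.
            public void OfferText()
            {
                OpenClipboard(Handle);
                EmptyClipboard();                              // makes this window the clipboard owner
                SetClipboardData(CF_UNICODETEXT, IntPtr.Zero); // NULL handle = render on demand
                CloseClipboard();
            }

            protected override void WndProc(ref Message m)
            {
                if (m.Msg == WM_RENDERFORMAT)
                {
                    // This is effectively the "paste notification": build the data now.
                    // The clipboard is already open for us here - do not call OpenClipboard.
                    string text = "produced at paste time: " + DateTime.Now; // illustrative payload
                    byte[] bytes = Encoding.Unicode.GetBytes(text + "\0");
                    IntPtr hMem = GlobalAlloc(GMEM_MOVEABLE, (UIntPtr)(uint)bytes.Length);
                    IntPtr ptr = GlobalLock(hMem);
                    Marshal.Copy(bytes, 0, ptr, bytes.Length);
                    GlobalUnlock(hMem);
                    SetClipboardData(CF_UNICODETEXT, hMem);    // the system takes ownership of hMem
                    return;
                }
                base.WndProc(ref m);
            }
        }

    This sketch only covers supplying data at paste time; the "what formats are available" side is implicit in which formats OfferText advertises, and handling of WM_RENDERALLFORMATS and WM_DESTROYCLIPBOARD is left out.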

    Read the article

  • Site to Site VPN problem, connection successful, data only one way?

    - by Charles
    To start things off, I'm not the actual administrator for the VPN server, but he is also at a loss, so I thought I'd ask it here. I know it's a Cisco ASA firewall/VPN. I have a router that connects to the Cisco VPN server, and it does so successfully. I can ping everything within the remote network and from the remote network into my own. I've been able to SSH into a remote server over the VPN as well; it all seems to work - until more data is returned. A quick example would be an internal webserver. The default homepage simply redirects, so it only sends back HTTP headers with a "Location:". I receive this on my computer, but when I then request the actual page (which isn't that big) I don't get a response at all - it just stalls. And it does this for other services as well, for example SSH. I can do a couple of things while connected, but if there's more than xx output it seems to do nothing. The connection remains active throughout all of this. Has anyone ever experienced anything like this before / know what the problem might be? Another user who has a site-to-site connection with this VPN using the -exact same setup- has no problems; the only difference is that I have around 200ms ping to the VPN server/network because of a very long distance (other continent).

    Read the article

  • Does this schema sound better suited for a document-oriented data store or relational?

    - by Blaine LaFreniere
    Disclaimer: let me know if this question is better suited for serverfault.com I want to store information on music, specifically: genres artists albums songs This information will be used in a web application, and I want people to be able to see all of the songs associated to an album, and albums associated to an artist, and artists associated to a genre. I'm currently using MySQL, but before I make a decision to switch I want to know: How easy is scaling horizontally? Is it easier to manage than an SQL based solution? Would the above data I want to store be too hard to do schema-free? When I think association, I immediately think RDBMSs; can data be stored in something like CouchDB but still have some kind of association as stated above?

    Read the article

  • Why isn't "String or Binary data would be truncated" a more descriptive error?

    - by rwmnau
    To start: I understand what this error means - I'm not attempting to resolve an instance of it. This error is notoriously difficult to troubleshoot, because if you get it inserting a million rows into a table 100 columns wide, there's virtually no way to determine which column of which row is causing the error - you have to modify your process to insert one row at a time, and then see which one fails. That's a pain, to put it mildly. Is there any reason the error doesn't look more like this?

        String or Binary data would be truncated
        Error inserting value "Some 18 char value" into SomeTable.SomeColumn VARCHAR(10)

    That would make it a lot easier to find and correct the value, if not the table structure itself. If seeing the table data is a security concern, then maybe something generic, like giving the length of the attempted value and the name of the failing column?
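
    Until the error itself improves, the row-level detail has to be recovered by hand. A small sketch of a pre-flight check that reports exactly the detail the message withholds - which column of which row is too long. The column widths are supplied by hand here (they could equally be read from INFORMATION_SCHEMA.COLUMNS), and all names are illustrative:

        using System.Collections.Generic;
        using System.Data;

        static class TruncationChecker
        {
            // Reports every string cell that exceeds its declared column width.
            public static IEnumerable<string> FindOversizedValues(
                DataTable rows, IDictionary<string, int> maxLengths)
            {
                for (int r = 0; r < rows.Rows.Count; r++)
                    foreach (var col in maxLengths)
                    {
                        var value = rows.Rows[r][col.Key] as string;
                        if (value != null && value.Length > col.Value)
                            yield return string.Format(
                                "Row {0}: value \"{1}\" ({2} chars) will not fit {3} VARCHAR({4})",
                                r, value, value.Length, col.Key, col.Value);
                    }
            }
        }

    Running this over the DataTable before the bulk insert (for example with new Dictionary<string, int> { { "SomeColumn", 10 } }) pinpoints the offending value without the insert-one-row-at-a-time dance described above.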

    Read the article

  • What is the best way to post data from web browser to server?

    - by Kronass
    Hi, I want to know the best way to send data from a web browser to a server using the POST method. I've seen a practice where all the element data is wrapped in XML, converted into a Base64 string, and then posted to the server (via Ajax or a hidden field). This approach will not work if JavaScript is disabled, but setting that aside: is it good practice to wrap the elements in XML (or create my own custom wrapper in general) and post that to the server, on the grounds that it enhances the maintainability of the code, or should I stick with the classical way and not add unnecessary text to the post?
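
    For reference, a small sketch of what the receiving end of the described practice could look like - the server unwrapping one posted field (called "payload" here; the field and element names are made up for illustration):

        using System;
        using System.Text;
        using System.Xml.Linq;

        static class PayloadReader
        {
            // Decodes a Base64 string posted from the browser back into the XML
            // document the client built, e.g. <form><email>a@b.c</email></form>.
            public static XDocument Decode(string base64Payload)
            {
                byte[] raw = Convert.FromBase64String(base64Payload);
                string xml = Encoding.UTF8.GetString(raw);
                return XDocument.Parse(xml);
            }
        }

    In ASP.NET it would typically be invoked as PayloadReader.Decode(Request.Form["payload"]); the trade-off is the one raised above - the wrapper adds weight to the post and sidesteps normal form binding in exchange for a single self-describing payload.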

    Read the article

  • Webservice randomly dropping connections - possibly due to firewall nonevent data?

    - by adam
    I have a hosted webapp which requests data from a REST webservice in our office. Each page calls one (or several) webservices, which go from our host, via our firewall (a Watchguard Firebox), to a server in our office. All of a sudden, the app has dramatically slowed. We have determined that the webservice is timing out at random when called externally (it's fine when called within the office network). I'm pretty certain it's our connection which is dropping the webservice call, so I've written a quick PHP/cURL script which calls the webservice over many iterations and shows the various timings. Below is an example output, showing both a failed and a successful call (with a 5 second timeout):

             http_code  namelookup_time  connect_time  pretransfer_time  starttransfer_time  total_time
        1    0          0.000096         0.0342        0.0000            0.0000              0.0342
        2    200        0.000052         0.0332        0.1327            0.1751              0.1752

    As per iteration #1 above, failed requests seem to be failing between connect and pretransfer. I'm not sure if this shows that the connection has successfully passed the firewall, or could the firewall still cause an issue? Our firewall is showing a series of nondata event log messages for the relevant access rule. Our IT team tells me these are routine, although I can find no mention of them in Google. I'm not sure if this fits in between connect and pretransfer. Having eliminated the webservice server (by testing internally) and the live webapp (by testing different code on different external servers), I am left suspecting the connection to the office. Could the Firebox nondata events be causing a problem between connect and pretransfer?

    Read the article

  • Any suggestions on data synchronization from a server to a desktop?

    - by Jamey McElveen
    I have a web-based CRM system that stores the data from all the clients in one database (MS SQL Server). We need to build a system that maintains a local copy of the data for each client. So basically the client database will have all the tables and columns except for the ClientId that is in every server table. I am aware that I will need to add fields on the server to support synchronization. Are there any good solutions or components already out there to help me accomplish this? We are using MS SQL Server and .NET C#.

    Read the article

  • How do I allow inline images with data urls on .NET 4 without triggering request validation?

    - by Johan Driessen
    I'm using the jQuery jstree plugin (http://jstree.com) in a ASP.NET MVC 2 project on .NET 4 RC. It comes with some stylesheets with inline images with data urls, like this: .tree-checkbox ul { background-image:url(data:image/gif;base64,R0lGODlhAgACAIAAAB4dGf///yH5BAEAAAEALAAAAAACAAIAAAICRF4AOw==); } Now, the url for the background image contains a colon, which .NET 4 thinks is an unsafe character, so I get this error message: A potentially dangerous Request.Path value was detected from the client (:). According to the documentation, I am supposed to be able to prevent this by adding <pages validateRequest="false" /> to my Web.config, but that doesn't seem to help. I have tried adding it to the main Web.config for the application, as well as to a special Web.config in the /config folder, but to no avail. Is there any way to get .NET to allow this?

    Read the article

  • Limit the amount of data that can be stored in a folder on Ubuntu Server 12.04?

    - by dougoftheabaci
    I'm in the process of building my first server. It's up, it's running, I'm transferring copious amounts of data away from my horrid little Drobo (DO NOT BUY ONE OF THESE, EVER). However, there's one thing I have yet to do: I'd like to set it up for Time Machine backups as well. I've seen all the guides and I have some idea of how to set the whole thing up, but the issue is that Time Machine will just fill up as much space as you let it. So if I let it loose in my 8 TB zpool it'll slowly consume every last available sector. This, of course, is not acceptable. I have a folder at the root of my zpool called "ZFS Time Machine" and I would like to limit it to 1 TB (all I need for backup purposes). However, I have no idea how to do that. Is this possible? I can continue using a small external hard drive attached via FW800 if I have to, but I'd much prefer putting everything on my server.

    Read the article

  • How do I persist form data across an "access denied" page in Drupal?

    - by Michael T. Smith
    We're building a small sub-site that, on the front page, has a single-input form that users can submit. From there, they're taken to a page with a few more associated form fields (add more details, tag it, etc.) with the first main form field already filled in. This works splendidly, thus far. The problem comes for users who are not logged in. When they submit that first form, they're taken to a (LoginToboggan-based) login page that allows them to log in. After they log in, they are redirected to the second form page, but the first main form field isn't filled in -- in other words, the form data didn't persist. How can we store that data and have it persist across the access denied page?

    Read the article

  • How to finish a broken data upload to the production Google App Engine server?

    - by WooYek
    I was uploading data to App Engine (not the dev server) through the loader class and the remote API, and I hit the quota in the middle of a CSV file. Based on the logs and the progress sqlite db, how can I select the remaining portion of the data to be uploaded? Going through tens of records to determine which were and which were not transferred is not an appealing task, so I'm looking for some way to limit the number of records I need to check. Here's the relevant (IMO) log portion; how do I interpret the work item numbers?

        [DEBUG 2010-03-30 03:22:51,757 bulkloader.py] [Thread-2] [1041-1050] Transferred 10 entities in 3.9 seconds
        [DEBUG 2010-03-30 03:22:51,757 adaptive_thread_pool.py] [Thread-2] Got work item [1071-1080]
        <cut>
        [DEBUG 2010-03-30 03:23:09,194 bulkloader.py] [Thread-1] [1141-1150] Transferred 10 entities in 4.6 seconds
        [DEBUG 2010-03-30 03:23:09,194 adaptive_thread_pool.py] [Thread-1] Got work item [1161-1170]
        <cut>
        [DEBUG 2010-03-30 03:23:09,226 bulkloader.py] [Thread-3] [1151-1160] Transferred 10 entities in 4.2 seconds
        [DEBUG 2010-03-30 03:23:09,226 adaptive_thread_pool.py] [Thread-3] Got work item [1171-1180]
        [ERROR 2010-03-30 03:23:10,174 bulkloader.py] Retrying on non-fatal HTTP error: 503 Service Unavailable

    Read the article

  • How to write a cctor and op= for a factory class with ptr to abstract member field?

    - by Kache4
    I'm extracting files from zip and rar archives into raw buffers. I created the following to wrap minizip and unrarlib: Archive.hpp #include "ArchiveBase.hpp" #include "ArchiveDerived.hpp" class Archive { public: Archive(string path) { /* logic here to determine type */ switch(type) { case RAR: archive_ = new ArchiveRar(path); break; case ZIP: archive_ = new ArchiveZip(path); break; case UNKNOWN_ARCHIVE: throw; break; } } Archive(Archive& other) { archive_ = // how do I copy an abstract class? } ~Archive() { delete archive_; } void passThrough(ArchiveBase::Data& data) { archive_->passThrough(data); } Archive& operator = (Archive& other) { if (this == &other) return *this; ArchiveBase* newArchive = // can't instantiate.... delete archive_; archive_ = newArchive; return *this; } private: ArchiveBase* archive_; } ArchiveBase.hpp class ArchiveBase { public: // Is there any way to put this struct in Archive instead, // so that outside classes instantiating one could use // Archive::Data instead of ArchiveBase::Data? struct Data { int field; }; virtual void passThrough(Data& data) = 0; /* more methods */ } ArchiveDerived.hpp "Derived" being "Zip" or "Rar" #include "ArchiveBase.hpp" class ArchiveDerived : public ArchiveBase { public: ArchiveDerived(string path); void passThrough(ArchiveBase::Data& data); private: /* fields needed by minizip/unrarlib */ // example zip: unzFile zipFile_; // example rar: RARHANDLE rarFile_; } ArchiveDerived.cpp #include "ArchiveDerived.hpp" ArchiveDerived::ArchiveDerived(string path) { //implement } ArchiveDerived::passThrough(ArchiveBase::Data& data) { //implement } Somebody had suggested I use this design so that I could do: Archive archiveFile(pathToZipOrRar); archiveFile.passThrough(extractParams); // yay polymorphism! How do I write a cctor for Archive? What about op= for Archive? What can I do about "renaming" ArchiveBase::Data to Archive::Data? (Both minizip and unrarlib use such structs for input and output. Data is generic for Zip & Rar and later is used to create the respective library's struct.)

    Read the article
