Search Results


  • Using html5 localstorage to retrieve data from server, to allow editing of and appending to the data source

    - by Adam
    So, I'm creating a standard user-data collection form that will be used by desktop web browsers as well as the iPhone and iPad. The app will let users create new records, and edit and delete existing ones. I've gotten the gist of using HTML5's localStorage to create a client-side data source, and I'm looking for direction on three things: getting existing data from a server into that client-side store, editing or deleting that data (or appending new records to it), and finally saving the updated data back to the server. It sounds like a lot, I know, but I've been poring over HTML5 tutorials and can't seem to find exactly what I'm looking for.
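    For illustration, a minimal sketch (browser JavaScript, not from the original post) of the flow being described, assuming a hypothetical /api/records endpoint that serves and accepts a JSON array of records:

        // Minimal sketch (assumption: a hypothetical /api/records endpoint
        // that returns and accepts a JSON array of records).
        var STORAGE_KEY = 'records';

        // 1. Pull the existing data from the server into localStorage.
        function loadFromServer() {
          return fetch('/api/records')
            .then(function (response) { return response.json(); })
            .then(function (records) {
              localStorage.setItem(STORAGE_KEY, JSON.stringify(records));
              return records;
            });
        }

        // 2. Work against the local copy: read, append, edit, delete.
        function getLocalRecords() {
          return JSON.parse(localStorage.getItem(STORAGE_KEY) || '[]');
        }

        function saveLocalRecords(records) {
          localStorage.setItem(STORAGE_KEY, JSON.stringify(records));
        }

        function addRecord(record) {
          var records = getLocalRecords();
          records.push(record);
          saveLocalRecords(records);
        }

        // 3. Push the updated copy back to the server when ready/online.
        function syncToServer() {
          return fetch('/api/records', {
            method: 'PUT',
            headers: { 'Content-Type': 'application/json' },
            body: localStorage.getItem(STORAGE_KEY)
          });
        }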

    Read the article

  • WCF Data Services implementation strategies.

    - by Nix
    Microsoft has done a savvy job of not outlining the actual place for Data Services in the wonderful world of SOA/web development. So my question is simple: are WCF Data Services designed to be consumed by clients, or has anyone ever heard of someone using them on the server side? Take a simple scenario, a general layered architecture using business objects (BO), where the parentheses indicate what is passed between layers: (XML) WCF Service - (BO) Business Logic - (BO) DAO - Entity Framework. Using Data Services instead, where the DS BOs are the modeled business entities used in the data service, it would be: (XML) WCF Service - (BO) Business Logic - (BO) WCF Data Service - (DS BO) Server. I can't see a use for the latter, unless there are going to be a lot of cases where people access your data via your Data Service layer rather than the service layer. Thoughts, anyone? I have not seen any mention of using Data Services from within a service layer.

    Read the article

  • How to use HTML-5 data-* attributes in ASP.NET MVC

    - by sleepy
    I am trying to use HTML5 data-* attributes in my ASP.NET MVC 1 project. (I am a C# and ASP.NET MVC newbie.)

        <%= Html.ActionLink("« Previous", "Search",
                new { keyword = Model.Keyword, page = Model.currPage - 1 },
                new { @class = "prev", data-details = "Some Details" }) %>

    The "data-details" entry in the htmlAttributes above gives the following error: CS0746: Invalid anonymous type member declarator. Anonymous type members must be declared with a member assignment, simple name or member access. It works when I use data_details, but I guess the attribute needs to start with "data-" as per the spec. My questions: Is there any way to get this working and use HTML5 data attributes with Html.ActionLink or similar HTML helpers? Is there any other mechanism to attach custom data to an element? The data will be processed later by JavaScript.
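    For illustration, a hedged sketch of one possible workaround (not from the original post, and assuming the default MVC namespace imports for System.Web.Routing and System.Collections.Generic): the anonymous-type limitation can be sidestepped with the ActionLink overload that accepts an IDictionary<string, object> of HTML attributes, since dictionary keys may contain hyphens. (Later MVC versions, 3 and up, also convert underscores in anonymous-type attribute names to hyphens automatically, but that does not help in MVC 1.)

        <%= Html.ActionLink("« Previous", "Search",
                new RouteValueDictionary(new { keyword = Model.Keyword, page = Model.currPage - 1 }),
                new Dictionary<string, object>
                {
                    { "class", "prev" },               // plain string key, no @ escape needed
                    { "data-details", "Some Details" } // hyphenated keys are fine in a dictionary
                }) %>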

    Read the article

  • getting a blank data report vb6

    - by arvind
    Hi, I am new to VB6. I am building an invoice-generation application and using a data report to show the generated invoice. The process works step by step: first the data is entered into the Invoice and ItemsInvoice tables; then the max ID is fetched from the database (using an Adodc control) to show the last generated invoice; then that max ID is passed as a parameter to the data report, which displays the invoice for that invoice ID. It works fine the first time I generate an invoice, but for the second invoice, without closing the application, I get a blank data report. The report is built on a DataEnvironment. My guess is that the report is blank because no record was found for that ID, but the record really is being inserted into the database. Please help me.

    Read the article

  • java: how to compress data into a String and uncompress data from the String

    - by Guillaume
    I want to put some compressed data into a remote repository. To put data on this repository I can only use a method that takes the name of the resource and its content as a String (like "data.txt" + "hello world"). The repository mimics a filesystem but is not one, so I cannot use File directly. I want to be able to do the following:
    - the client sends a file 'data.txt' to the server
    - the server compresses 'data.txt' into data.zip
    - the server sends the content of data.zip to the repository
    - the repository stores data.zip
    - the client downloads data.zip from the repository and is able to open it with its favorite zip tool
    I have tried a lot of compression examples found on the web, but each time I send the data to the repository, the resulting zip file is corrupted. Here is a sample class, using the Zip*Stream classes, that emulates the repository and shows my problem. The created zip file is fine, but after its 'serialization' it gets corrupted. (The sample class uses Jakarta Commons IO.) Many thanks for your help.

        package zip;

        import java.io.File;
        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipInputStream;
        import java.util.zip.ZipOutputStream;

        import org.apache.commons.io.FileUtils;

        /**
         * Date: May 19, 2010 - 6:13:07 PM
         *
         * @author Guillaume AME.
         */
        public class ZipMe {

            public static void addOrUpdate(File zipFile, File... files) throws IOException {
                File tempFile = File.createTempFile(zipFile.getName(), null);
                // delete it, otherwise you cannot rename your existing zip to it.
                tempFile.delete();

                boolean renameOk = zipFile.renameTo(tempFile);
                if (!renameOk) {
                    throw new RuntimeException("could not rename the file " + zipFile.getAbsolutePath()
                            + " to " + tempFile.getAbsolutePath());
                }

                byte[] buf = new byte[1024];
                ZipInputStream zin = new ZipInputStream(new FileInputStream(tempFile));
                ZipOutputStream out = new ZipOutputStream(new FileOutputStream(zipFile));

                ZipEntry entry = zin.getNextEntry();
                while (entry != null) {
                    String name = entry.getName();
                    boolean notInFiles = true;
                    for (File f : files) {
                        if (f.getName().equals(name)) {
                            notInFiles = false;
                            break;
                        }
                    }
                    if (notInFiles) {
                        // Add ZIP entry to output stream.
                        out.putNextEntry(new ZipEntry(name));
                        // Transfer bytes from the ZIP file to the output file
                        int len;
                        while ((len = zin.read(buf)) > 0) {
                            out.write(buf, 0, len);
                        }
                    }
                    entry = zin.getNextEntry();
                }

                // Close the streams
                zin.close();

                // Compress the files
                if (files != null) {
                    for (File file : files) {
                        InputStream in = new FileInputStream(file);
                        // Add ZIP entry to output stream.
                        out.putNextEntry(new ZipEntry(file.getName()));
                        // Transfer bytes from the file to the ZIP file
                        int len;
                        while ((len = in.read(buf)) > 0) {
                            out.write(buf, 0, len);
                        }
                        // Complete the entry
                        out.closeEntry();
                        in.close();
                    }
                    // Complete the ZIP file
                }
                tempFile.delete();
                out.close();
            }

            public static void main(String[] args) throws IOException {
                final String zipArchivePath = "c:/temp/archive.zip";
                final String tempFilePath = "c:/temp/data.txt";
                final String resultZipFile = "c:/temp/resultingArchive.zip";

                File zipArchive = new File(zipArchivePath);
                FileUtils.touch(zipArchive);
                File tempFile = new File(tempFilePath);
                FileUtils.writeStringToFile(tempFile, "hello world");

                addOrUpdate(zipArchive, tempFile);
                // archive.zip exists and contains a compressed data.txt that can be read using winrar

                // now simulate writing of the zip into an in-memory cache
                String archiveText = FileUtils.readFileToString(zipArchive);
                FileUtils.writeStringToFile(new File(resultZipFile), archiveText);
                // resultingArchive.zip exists, contains a compressed data.txt, but it can not
                // be read using winrar: CRC failed in data.txt. The file is corrupt
            }
        }
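    For reference, a hedged sketch (not the poster's code, assumes Java 8+ for java.util.Base64): binary zip bytes generally do not survive a String round trip, because readFileToString and writeStringToFile apply a character encoding; encoding the bytes as Base64 first gives a String that can be stored and later decoded back into an identical zip.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.util.Base64;

        // Hedged sketch: carry zip bytes through a String-only API by Base64-encoding them.
        public class ZipAsString {

            // Read the zip and turn its bytes into a text-safe String.
            public static String encode(Path zipFile) throws IOException {
                byte[] raw = Files.readAllBytes(zipFile);
                return Base64.getEncoder().encodeToString(raw);
            }

            // Decode the stored String back into a byte-for-byte identical zip.
            public static void decode(String stored, Path target) throws IOException {
                Files.write(target, Base64.getDecoder().decode(stored));
            }

            public static void main(String[] args) throws IOException {
                String stored = encode(Paths.get("c:/temp/archive.zip"));
                decode(stored, Paths.get("c:/temp/resultingArchive.zip")); // opens fine in a zip tool
            }
        }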

    Read the article

  • jquery - access json data from php

    - by Cinaird
    I have a problem accessing JSON data. I'm new to JSON and jQuery, so there is probably an easy solution and I would be glad to find it. My jQuery:

        $.post(
            "currentPage.php",
            { 'currentPage': 1 },
            function(data){
                $("body").append(data);
            }
        );

    currentPage.php:

        $returnArray['left'] = 'test_left';
        $returnArray['right'] = 'test_right';
        $returnArray['numLeft'][] = 1;
        $returnArray['numRight'][] = 2;
        $returnArray['numRight'][] = 3;
        print json_encode($returnArray);

    I tried to access the data like this: data.left and data['left'], but both come back blank. What is the best way to access the data?
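    For illustration, a minimal sketch (not from the original post): telling jQuery that the response is JSON, by passing the dataType argument, makes data arrive as an object instead of a plain string. Alternatively, the PHP side can send header('Content-Type: application/json') before printing, and jQuery will infer the type.

        // Minimal sketch: the fourth argument ("json") makes jQuery parse the
        // response, so `data` is an object rather than a string.
        $.post(
            "currentPage.php",
            { currentPage: 1 },
            function (data) {
                alert(data.left);          // "test_left"
                alert(data.numRight[1]);   // 3
            },
            "json"
        );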

    Read the article

  • Java: versioned data structures?

    - by Jason S
    I have a data structure that is pretty simple (basically a structure containing some arrays and single values), but I need to record its history so that I can efficiently get its contents at any point in time. Is there a relatively straightforward way to do this? The best approach I can think of is to encapsulate the whole data structure behind something that handles all the mutating operations by storing the data in functional (persistent) data structures, and then, for each mutation, to cache a copy of the structure in a map indexed by time ordering (e.g. a TreeMap with real time as the key, or a HashMap keyed by a mutation counter combined with one or more TreeMap indexes mapping real time / tick count / etc. to mutation operations). Any suggestions?
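    For illustration, a minimal sketch (Java, not from the original post) of the snapshot-per-mutation idea described above: each mutation stores an immutable copy keyed by a version counter, and TreeMap.floorEntry answers point-in-time queries. A real implementation would likely use persistent (functional) collections instead of full copies so that each mutation stays cheap.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Map;
        import java.util.TreeMap;

        public class VersionedList<T> {
            private final TreeMap<Long, List<T>> history = new TreeMap<Long, List<T>>();
            private List<T> current = new ArrayList<T>();
            private long version = 0;

            public synchronized long add(T value) {
                current = new ArrayList<T>(current);   // copy-on-write: old snapshots stay untouched
                current.add(value);
                history.put(++version, current);
                return version;                        // caller can map this to a timestamp if needed
            }

            public synchronized List<T> snapshotAt(long atVersion) {
                Map.Entry<Long, List<T>> entry = history.floorEntry(atVersion); // latest snapshot <= atVersion
                return entry == null ? new ArrayList<T>() : entry.getValue();
            }
        }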

    Read the article

  • JSON to Persistent Data Store (CoreData, etc.)

    - by Bryan Veloso
    All of the data in my application is pulled from an API as JSON. A good percentage of this data doesn't change very often, so making repeated JSON requests for a list that rarely changes doesn't seem very appealing. I'm looking for the most sensible way to save this JSON on the iPhone in some sort of persistent data store. One obvious plus of persisting the data is being able to serve it when the phone can't reach the API. I've looked at a few examples of JSON and Core Data interacting, but they only seem to describe transforming NSManagedObjects into JSON. If I can transform JSON into Core Data, my only remaining problem is updating that data when the data from the API does change. (Or maybe this is all just silly.)
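    For illustration, a hedged sketch of the JSON-to-Core-Data direction (written in Swift for brevity; the "Item" entity, its attributes, and the "id" key are assumptions): match each incoming record on a server-side ID so existing rows are updated, rather than duplicated, when the API data changes.

        import CoreData
        import Foundation

        // Hedged sketch: upsert JSON records into a hypothetical "Item" entity keyed
        // by a "remoteID" attribute.
        func upsert(json: [[String: Any]], into context: NSManagedObjectContext) throws {
            for record in json {
                guard let remoteID = (record["id"] as? NSNumber)?.int64Value else { continue }

                let request = NSFetchRequest<NSManagedObject>(entityName: "Item")
                request.predicate = NSPredicate(format: "remoteID == %lld", remoteID)
                request.fetchLimit = 1

                // Reuse the row if this API object was seen before, otherwise insert a new one.
                let item = try context.fetch(request).first
                    ?? NSEntityDescription.insertNewObject(forEntityName: "Item", into: context)

                item.setValue(remoteID, forKey: "remoteID")
                item.setValue(record["title"] as? String, forKey: "title")
            }
            try context.save()
        }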

    Read the article

  • best way to statistically detect anomalies in data

    - by reinier
    Hi, our web app collects a huge amount of data about user actions, network traffic, database load, and so on. All of it is stored in warehouses and we have quite a lot of interesting views on this data. If something odd happens, chances are it shows up somewhere in the data. However, to detect manually whether something out of the ordinary is going on, one has to look through this data continually and hunt for oddities. My question: what is the best way to detect changes in dynamic data that can be considered 'out of the ordinary'? Are Bayesian filters (I've seen these mentioned when reading about spam detection) the way to go? Any pointers would be great! EDIT: To clarify, the data shows, for example, a daily curve of database load. This curve typically looks similar to yesterday's curve, and over time it may change slowly. It would be nice if a warning could go off when the curve changes from day to day by more than some threshold.
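    For illustration, a minimal sketch (Python, not from the original post) of a simple statistical check: compare each hour of today's load curve with the same hour on previous days and flag values that fall more than a few standard deviations from the historical mean. Data layout and numbers are assumptions.

        import statistics

        def find_anomalies(history, today, threshold=3.0):
            """history: list of previous days, each a list of hourly load values.
            today: list of hourly load values for the current day (may be partial)."""
            anomalies = []
            for hour, value in enumerate(today):
                samples = [day[hour] for day in history]      # same hour across past days
                mean = statistics.mean(samples)
                stdev = statistics.pstdev(samples) or 1e-9    # avoid division by zero on flat data
                z = (value - mean) / stdev
                if abs(z) > threshold:
                    anomalies.append((hour, value, round(z, 2)))
            return anomalies

        if __name__ == "__main__":
            history = [[100, 120, 90], [105, 118, 95], [98, 125, 92]]  # three previous "days"
            today = [102, 300, 93]                                     # hour 1 looks abnormal
            print(find_anomalies(history, today))                      # -> [(1, 300, ...)]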

    Read the article

  • All words in a trie data-structure

    - by John Smith
    I'm trying to collect all the words in a trie into a string. A word is denoted by the eow field being true for a certain character in the trie, so the trie can contain letters that lead up to no word; for example, "abc" may be in the trie, but if "c"'s eow field is false then "abc" is not a word. Here is my data structure:

        struct Trie {
            bool eow;            // when eow == true, the path to this node is a word
            char letter;
            Trie *letters[27];
        };

    And here is my attempted print-all function, which should return all words in one string, separated by spaces:

        string printAll(string word, Trie& data) {
            if (data.eow == 1)
                return word + " ";
            for (int i = 0; i < 26; i++) {
                if (data.letters[i] != NULL)
                    printAll(word + data.letters[i]->letter, *(data.letters[i]));
            }
            return "";
        }

    It's not outputting what I want. Any suggestions?
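    For comparison, a hedged sketch of one possible fix (not the poster's code, and assuming the Trie struct above): record the word when eow is set but keep descending, and append the result of every recursive call instead of discarding it.

        #include <string>

        std::string printAll(const std::string& word, const Trie& node) {
            std::string result;
            if (node.eow)
                result += word + " ";        // a word ends here; longer words may still follow
            for (int i = 0; i < 26; i++) {
                if (node.letters[i] != NULL)
                    result += printAll(word + node.letters[i]->letter, *node.letters[i]);
            }
            return result;                   // call as printAll("", root)
        }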

    Read the article

  • Getting Database Data to the client side.

    - by Toby Allen
    I'm sure this has been asked a million times, but what is the accepted approach? I have been writing PHP code for a while, and until recently I only copied and pasted JavaScript, but with help from YUI I've begun to understand JavaScript and want to use it more in an existing web app. I want to get various data from databases etc. to the client-side JavaScript; I already have access to this data in my PHP pages when they load. What is the correct way to get it to my client-side script?
    - Generate the client-side JavaScript in my PHP or Smarty template files, inserting the data where I need it?
    - Use an Ajax call to retrieve the information from a PHP file that returns JSON?
    - Generate a JSON construct of the data needed by the page and either return it via a script inclusion or dump it into the generated page?
    - Something really obvious I haven't thought of?
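    For illustration, a minimal sketch (not from the original post) of the "dump a JSON construct into the generated page" option from the list above, using json_encode; the variable names and example data are assumptions.

        <?php
        // Hedged sketch: dump server-side data into the generated page as a JSON
        // literal so client-side JavaScript can use it directly. In practice
        // $pageData would come from the database layer.
        $pageData = array('userId' => 42, 'items' => array('alpha', 'beta'));
        ?>
        <script type="text/javascript">
            var pageData = <?php echo json_encode($pageData); ?>;
            // pageData.userId === 42, pageData.items[1] === "beta"
        </script>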

    Read the article

  • what's the key difference between data management and data governance?

    - by Sid Xing
    I just read some articles about these two disciplines, and I thought they had a similar goal, but data governance (DG) seems to be more about process management, following best practices. So my first question is about the difference between DG and data management (DM); I'm confused. There are so many concepts around data management: data quality, data security, data governance, data profiling, data integration, master data management, metadata management... None of them seems to be exactly separate; they all overlap. My second question is really a request for suggestions to help me better understand the relationships between these concepts. I appreciate your help.

    Read the article

  • Possibly simple PHP/MYSQL issue with retrieving and showing data

    - by envoys
    I have been racking my brains over this for a while now. Here is an example of the data I have in the SQL database:

        ID | TYPE | DATA
        1  | TXT  | TEST
        2  | PHP  | php
        3  | JS   | JAVASCRIPT

    That is just an example; there are multiple listings for TXT, PHP and JS throughout the table. What I want to do is retrieve all the data and display it in separate drop-down/select boxes: select box one would list all rows with type TXT, select box two all rows with type PHP, and select box three all rows with type JS. The only way I have come up with so far is running an individual SQL query for each type. I know there is a way to do it all in one query and then display it the way I want, but I just can't seem to figure out how, and I know it's going to drive me nuts when someone helps and I see how they did it. Thanks for the input.
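    For illustration, a hedged sketch (PHP with mysqli; the table name, column names and connection details are assumptions based on the example above) of the single-query approach: fetch everything once, group the rows by TYPE in PHP, and render one select box per group.

        <?php
        // Hedged sketch: one query, then group rows by TYPE and emit one <select> per group.
        $mysqli = new mysqli('localhost', 'user', 'pass', 'mydb');
        $result = $mysqli->query('SELECT ID, TYPE, DATA FROM mytable ORDER BY TYPE, DATA');

        $grouped = array();
        while ($row = $result->fetch_assoc()) {
            $grouped[$row['TYPE']][] = $row;   // e.g. $grouped['TXT'], $grouped['PHP'], $grouped['JS']
        }

        foreach ($grouped as $type => $rows) {
            echo '<select name="' . htmlspecialchars($type) . '">';
            foreach ($rows as $row) {
                echo '<option value="' . (int)$row['ID'] . '">'
                   . htmlspecialchars($row['DATA']) . '</option>';
            }
            echo '</select>';
        }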

    Read the article

  • Oracle Data Integrator 11.1.1.5 Complex Files as Sources and Targets

    - by Alex Kotopoulis
    Overview

    ODI 11.1.1.5 adds the new Complex File technology for use with file sources and targets. The goal is to read or write file structures that are too complex to be parsed using the existing ODI File technology. This includes:
    - Different record types in one list that use different parsing rules
    - Hierarchical lists, for example customers with nested orders
    - Parsing instructions in the file data, such as delimiter types, field lengths, and type identifiers
    - Complex headers, such as multiple header lines or parseable information in the header
    - Skipping of lines
    - Conditional or choice fields

    Similar to the ODI File and XML File technologies, complex file parsing is done through a JDBC driver that exposes the flat file as relational table structures. Complex files are mapped to one or more table structures, as opposed to the (simple) File technology, which always has a one-to-one relationship between file and table. The resulting set of tables follows the same concept as the ODI XML driver: table rows have additional PK-FK relationships to express hierarchy, as well as order values to maintain the file order in the resulting tables.

    The parsing instruction format used for complex files is the nXSD (native XSD) format that is already in use with Oracle BPEL. This format extends the XML Schema standard by adding parsing instructions to each element. Using nXSD parsing technology, the native file is converted into an internal XML format. It is important to understand that the XML is streamed to improve performance; there is no size limitation on the native file based on memory size, and the XML data is never fully materialized. The internal XML is then converted to a relational schema using the same mapping rules as the ODI XML driver.

    How to Create an nXSD File

    Complex file models depend on the nXSD schema for the given file. This nXSD file has to be created using a text editor or the Native Format Builder Wizard that is part of Oracle BPEL. BPEL is included in the ODI Suite, but not in standalone ODI Enterprise Edition. The nXSD format extends the standard XSD format through nxsd attributes. nXSD is a valid XML Schema, since the XSD standard allows extra attributes with their own namespaces.
    The following is a sample nXSD schema:

        <?xml version="1.0"?>
        <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                    xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd"
                    xmlns:tns="http://xmlns.oracle.com/pcbpel/demoSchema/csv"
                    targetNamespace="http://xmlns.oracle.com/pcbpel/demoSchema/csv"
                    elementFormDefault="qualified"
                    attributeFormDefault="unqualified"
                    nxsd:encoding="US-ASCII" nxsd:stream="chars" nxsd:version="NXSD">
          <xsd:element name="Root">
            <xsd:complexType>
              <xsd:sequence>
                <xsd:element name="Header">
                  <xsd:complexType>
                    <xsd:sequence>
                      <xsd:element name="Branch"   type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
                      <xsd:element name="ListDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
                    </xsd:sequence>
                  </xsd:complexType>
                </xsd:element>
                <xsd:element name="Customer" maxOccurs="unbounded">
                  <xsd:complexType>
                    <xsd:sequence>
                      <xsd:element name="Name"   type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
                      <xsd:element name="Street" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
                      <xsd:element name="City"   type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
                    </xsd:sequence>
                  </xsd:complexType>
                </xsd:element>
              </xsd:sequence>
            </xsd:complexType>
          </xsd:element>
        </xsd:schema>

    The nXSD schema annotates elements to describe their position and delimiters within the flat text file. The schema above uses almost exclusively the nxsd:terminatedBy instruction to look for the next terminator characters. There are various constructs in nXSD to parse fixed-length fields, look ahead in the document for string occurrences, perform conditional logic, use variables to remember state, and many more.

    nXSD files can either be written manually using an XML Schema editor or created using the Native Format Builder Wizard. Both the Native Format Builder Wizard and the nXSD language are described in the Application Server Adapter Users Guide. The way to start the Native Format Builder in BPEL is to create a new File Adapter; in step 8 of the Adapter Configuration Wizard, a new Schema for Native Format can be created. The Native Format Builder guides you through a number of steps to generate the nXSD based on a sample native file. If the format is complex, it is often a good idea to “approximate” it with a similar simple format and then add the complex components manually. The resulting *.xsd file can be copied and used as the format for ODI; other BPEL constructs such as the file adapter definition are not relevant for ODI. Using this technique it is also possible to parse the same file format in SOA Suite and ODI, for example using SOA for small real-time messages and ODI for large batches.

    The nXSD schema in this example describes a file with a header row containing data, and 3 string fields per row, delimited by commas, for example:

        Redwood City Downtown Branch, 06/01/2011
        Ebeneezer Scrooge, Sandy Lane, Atherton
        Tiny Tim, Winton Terrace, Menlo Park

    The ODI Complex File JDBC driver exposes the file structure through a set of relational tables with PK-FK relationships.
    The tables for this example are:

        Table ROOT (1 row):
            ROOTPK        Primary key for the root element
            SNPSFILENAME  Name of the file
            SNPSFILEPATH  Path of the file
            SNPSLOADDATE  Date of load

        Table HEADER (1 row):
            ROOTFK         Foreign key to the ROOT record
            ROWORDER       Order of the row in the native document
            BRANCH         Data
            BRANCHORDER    Order of Branch within the row
            LISTDATE       Data
            LISTDATEORDER  Order of ListDate within the row

        Table CUSTOMER (2 rows):
            ROOTFK       Foreign key to the ROOT record
            ROWORDER     Order of the row in the native document
            NAME         Data
            NAMEORDER    Order of Name within the row
            STREET       Data
            STREETORDER  Order of Street within the row
            CITY         Data
            CITYORDER    Order of City within the row

    Every table has PK and/or FK fields to reflect the document hierarchy through relationships. In this example this is trivial, since the HEADER and all CUSTOMER records point back to the PK of ROOT; deeper nested documents need these fields to identify parent elements. All tables also have a ROWORDER field to define the order of rows, as well as order fields for each column, in case the order of columns varies in the original document and needs to be maintained. If order is not relevant, these fields can be ignored.

    How to Create a Complex File Data Server in ODI

    After creating the nXSD file and a test data file, and storing them on a local file system accessible to ODI, you can go to the ODI Topology Navigator to create a Data Server and Physical Schema under the Complex File technology. This technology follows the conventions of other ODI technologies and is very similar to the XML technology. The parsing settings, such as the source native file, the nXSD schema file, the root element, and the external database, are set in the JDBC URL.

    The use of an external database defined by dbprops is optional, but it is strongly recommended for production use; ideally, the staging database should be used for this. Also, when a complex file is used exclusively for read purposes, it is recommended to set the ro=true property to ensure the file is not unnecessarily synchronized back from the database when the connection is closed. A data file is always required to be present at the filename path during design time. Without this file, operations like testing the connection, reading the model data, or reverse-engineering the model will fail. All properties of the Complex File JDBC driver are documented in the Oracle Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator, Appendix C: Oracle Data Integrator Driver for Complex Files Reference. David Allan has created a great viewlet, Complex File Processing - 0 to 60, which shows the creation of a Complex File data server as well as a model based on this server.

    How to Create Models Based on a Complex File Schema

    Once the physical schema and logical schema have been created, the Complex File can be used to create a Model as if it were based on a database. When reverse-engineering the Model, data stores (tables) will be created for each XSD element of complex type. Using complex files as sources is straightforward; when using them as targets, it has to be ensured that all dependent tables have matching PK-FK pairs; the same applies to the XML driver as well.

    Debugging and Error Handling

    There are different ways to test an nXSD file. The Native Format Builder Wizard can be used even if the nXSD wasn't created in it; it will show issues related to the schema and/or test data. In ODI, the nXSD will be parsed and run against the existing test data file when testing a connection in the data server.
    If either the nXSD has an error or the data does not comply with the schema, an error is displayed. Sample error message:

        Error while reading native data. [Line=1, Col=5] Not enough data available in the input,
        when trying to read data of length "19" for "element with name D1" from the specified
        position, using "style" as "fixedLength" and "length" as "". Ensure that there is enough
        data from the specified position in the input.

    Complex File FAQ

    Is the size of the native file limited by available memory?
    No. Since the native data is streamed through the driver, only the available space in the staging database limits the size of the data. There are limits on individual field sizes, though; a single large object field needs to fit in memory.

    Should I always use the complex file driver instead of the file driver in ODI now?
    No. Use the File technology for all simple file parsing tasks, for example any fixed-length or delimited files that have just one row format and can be mapped into a simple table. Because of its narrow assumptions, the ODI file driver is easy to configure within ODI and can stream file data without writing it into a database. The complex file driver should be used whenever the use case cannot be handled by the file driver.

    Are we generating XML out of flat files before we write them into a database?
    No. We don't materialize any XML as part of parsing a flat file, either in memory or on disk. The data produced by the XML parser is streamed as Java objects that just use the XSD-derived nXSD schema as their type system. We use the nXSD schema because it is the standard for describing complex flat-file metadata in Oracle Fusion Middleware, and it enables users to share schemas across products.

    Is the nXSD file interchangeable with SOA Suite?
    Yes, ODI can use the same nXSD files as SOA Suite, allowing mixed use cases with the same data format.

    Can I start the Native Format Builder from ODI Studio?
    No, the Native Format Builder has to be started from a JDeveloper instance with BPEL installed. You can get BPEL as part of the SOA Suite bundle. Users without SOA Suite can develop nXSD files manually using XSD editors.

    When is the database data written back to the native file?
    Data is synchronized using the SYNCHRONIZE and CREATE FILE commands, and when the JDBC connection is closed. It is recommended to set the ro or read_only property to true when a file is used exclusively for reading, so that no unnecessary write-backs occur.

    Is the nXSD metadata part of the ODI Master or Work Repository?
    No, the data server definition in the master repository only contains the JDBC URL with file paths; the nXSD files have to be accessible on the file systems where the JDBC driver is executed during production, either by copying them or by using a network file system.

    Where can I find sample nXSD files?
    The Application Server Adapter Users Guide contains nXSD samples for various use cases.
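    As a footnote to the table structures listed above, a hedged sketch (not from the original article, and assuming the driver's SQL dialect accepts standard joins): because the driver exposes ordinary relational tables over JDBC, the file hierarchy can be reassembled with a join on the key columns and ordered by ROWORDER.

        -- Hedged, illustrative query over the ROOT/HEADER/CUSTOMER tables described above.
        SELECT h.BRANCH, h.LISTDATE, c.NAME, c.STREET, c.CITY
        FROM ROOT r
        JOIN HEADER   h ON h.ROOTFK = r.ROOTPK
        JOIN CUSTOMER c ON c.ROOTFK = r.ROOTPK
        ORDER BY c.ROWORDER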

    Read the article

  • Data Recovery needed with a Sector Zero problem on a HDD

    - by Jay Robins
    I left my computer (running XP) copying files to an external drive. I came back a few hours later and the laptop was pretty hot and had frozen (ironic, isn't it). I forced a reboot, and the laptop HDD hasn't worked since. I set it up in an external enclosure, but XP can't mount the HDD to a drive letter, although it recognizes the manufacturer and drive size (Hitachi, 320 GB). There is no noise or rattling when the drive spins, but I can't get anything off it, since I can't mount it or see much of anything. A computer repair shop ran some software tests, said they came back with a "sector zero bad" message, and told me I need to send it to a professional data recovery service. Any other options or ideas before I have to spend thousands of dollars to recover my data? Any help would be greatly appreciated! I'm desperate and a poor student! Thanks in advance, -Jay

    Read the article

  • Data Archiving vs not

    - by Recursion
    For the sake of data integrity, is it wiser to archive your files or just leave them unarchived? No compression is being used. My thinking is that if you leave your files unarchived and some form of corruption occurs, it will only hurt a small number of files, whereas if you archive, let's say, all of your documents and there is even the slightest corruption, the entire archive is unrecoverable. So what's the best way to keep a clean file system without being vulnerable to data corruption?

    Read the article

  • Excel Graph: How can I turn data below in to a 'time based' graph

    - by Mike
    In my spreadsheet I am collecting the time periods in which certain values have been changed. The user is restricted to 4 time periods, and I would like to show the data based on those periods. I've included a mock-up of the data and the type of graph I would like to create: http://i48.tinypic.com/55lezr.jpg. I've tried to build it for the last hour but am obviously missing something, so I thought I'd ask around. Many thanks for any help. Mike. P.S. How do I make this image appear in the message and not as a link?

    Read the article

  • Why Standards Place Limits on Data Transfer Rates?

    - by Mehrdad
    This is a rather general question about hardware and standards: why do standards place limits on data transfer rates and disallow manufacturers from exceeding them? (E.g. 54 Mbit/s for 802.11g, 150 Mbit/s per stream for 802.11n, ...) Why not allow some sort of handshake protocol, whereby the connected devices agree on the maximum throughput they each support and use that value instead? Why does there have to be a hard-coded limit, which requires a new standard for every single improvement to a data rate?

    Read the article

  • Converge Voice and Data networks using Sonicwall

    - by skinneejoe
    We are looking to converge VoIP and data traffic onto a single wire so that our client's VoIP phones pass data through to the user's computer. We are speccing out a new SonicWall NSA appliance to handle routing and layer 2 switches to manage the VLANs. It's not a huge network, medium sized. What should I know about converging the networks onto a single wire? Obviously I'll want to prioritize voice traffic; is this handled solely in the SonicWall with QoS configuration, or do the layer 2 switches need to be configured differently? Any other pitfalls I should be aware of, or good resources for learning more?

    Read the article

  • Does data mining qualify as an abuse?

    - by Hybryd
    Hi all, today I had a strange experience with my ISP. They disabled the password for my internet connection, and when I called them they enabled it again, but they didn't say why it had happened. Over the last couple of days I was running a data-mining script I wrote against one forum, to gather some useful info about the business I'm in. So I thought: maybe my ISP decided that 10,000 page requests to the same site in a couple of hours looked like some kind of attack. What do you think, does it qualify as an attack? Is it even OK to data mine in that way?

    Read the article

  • Excel data range - to sum series within date range

    - by Mark
    I have a set of data that I would like to manipulate, but my problem is not straightforward. The data covers date ranges that include multiple entries for the same date on some days and not on others. What I need to accomplish is to manage a trading account so that no more than 1% of the account is put at risk on any given day (retrospectively). To do this, when a series of trades falls on the same day, I need to total the risk associated with each of those trades so that I can limit the combined risk by limiting the position size I take in each. Here is a sample of the data I am working with. As you can see, there are 5 trades on Jan 3, each of which comes with a risk value. I need to add the risk values of those 5 trades so that I can compare the total to the account value and then determine whether I should take more than one position in each trade. There are different numbers of trades on the 4th, 5th, 6th and 9th. I need the values returned in each row so that I can manipulate them further in the spreadsheet. I am not new to Excel, but cannot come up with a solution here - your input is much appreciated. Forgive the presentation below - I cannot upload a picture (new user) and the formatting does not carry across from Excel; I have aligned the first several lines manually. Thanks.

        Date ............. Pair ....... L/S ...... Initial Risk .......Win ......Loss ....BE. ....Avg Gain Avg Loss pips/swing
        1/3/2012 ....EUR/USD ....S .............15 ................1 ..................................10 ..........................15
        1/3/2012 ....USD/CHF .....L ............15 ..........................................1 ..........0
        1/3/2012 ....AUD/USD ....S .............15 ................1 .................................16 ...........................18
        1/3/2012 ....NZD/USD ....S .............15 ................1 ...................................7 .............................8
        1/3/2012 ....AUD/JPY .... S .............10 ................1 .................................25 ............................20
        1/4/2012 ....EUR/USD ....L .............20 ................1 .................................19 ...........................19
        1/4/2012 ....USD/CHF ....S ............ 15 ................1 .................................17 ...........................20
        1/4/2012 EUR/JPY L 20 1 0
        1/5/2012 EUR/USD L 15 1 10 20
        1/5/2012 GBP/USD L 20 1 15 20
        1/5/2012 USD/CHF S 15 1 0
        1/5/2012 USD/JPY S 10 1 7 10
        1/5/2012 USD/CAD S 15 1 28 36
        1/5/2012 AUD/USD L 15 1 20 20
        1/6/2012 USD/CAD S 15 1 5 -10
        1/6/2012 EUR/JPY L 15 1 7 7
        1/9/2012 AUD/USD S 15 1 22 30
        1/9/2012 NZD/USD S 15 1 10 15
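    For reference, a hedged sketch of the kind of per-day total being described, assuming the dates sit in column A and the initial risk values in column D (adjust the ranges to the real layout). Placed in a helper column and filled down, the formula returns the combined risk of every trade sharing that row's date, which can then be compared against 1% of the account value:

        =SUMIF($A$2:$A$100, A2, $D$2:$D$100)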

    Read the article
