Search Results

Search found 60932 results on 2438 pages for 'data operations'.


  • cached database

    - by radi
    Hi, in my project I need two tables, each with about 2000 rows. I want my application to be fast, so the database should be loaded into memory (cached) when the app starts, and saved back to disk before it closes. I am using Java and I want to use SQL.
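
    A minimal sketch of one way to do this, assuming the embedded H2 database (an assumption; HSQLDB has a similar in-memory mode): run the tables entirely in RAM, restore a SQL snapshot at startup, and dump it back to disk at shutdown. The file path is illustrative.

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.Statement;

      public class CachedDb {
          public static void main(String[] args) throws Exception {
              // In-memory database; DB_CLOSE_DELAY=-1 keeps it alive between connections
              Connection conn = DriverManager.getConnection("jdbc:h2:mem:appdb;DB_CLOSE_DELAY=-1");
              try (Statement st = conn.createStatement()) {
                  // Restore the snapshot saved by the previous run
                  // (on the very first run, create the tables here instead)
                  st.execute("RUNSCRIPT FROM 'data/appdb.sql'");
              }

              // ... all queries now run against RAM ...

              try (Statement st = conn.createStatement()) {
                  // Persist schema and data before shutdown
                  st.execute("SCRIPT TO 'data/appdb.sql'");
              }
              conn.close();
          }
      }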

  • Changing WCF endpoint does not persist data

    - by Vinay Pandey
    Hi all, I have an application that references a WCF service on machine A; in certain situations I want to use a similar service hosted on machine B. I changed the endpoint as follows:

      EndpointAddress endpoint = new EndpointAddress(
          new Uri(ConfigurationManager.AppSettings["ServiceURLForMachineB"]));
      BasicHttpBinding binding = new BasicHttpBinding();
      binding.SendTimeout = TimeSpan.FromMinutes(1);
      binding.OpenTimeout = TimeSpan.FromMinutes(1);
      binding.CloseTimeout = TimeSpan.FromMinutes(1);
      binding.ReceiveTimeout = TimeSpan.FromMinutes(10);
      binding.AllowCookies = false;
      binding.BypassProxyOnLocal = false;
      binding.HostNameComparisonMode = HostNameComparisonMode.StrongWildcard;
      binding.MessageEncoding = WSMessageEncoding.Mtom;
      binding.TextEncoding = System.Text.Encoding.UTF8;
      binding.TransferMode = TransferMode.Buffered;
      binding.UseDefaultWebProxy = true;
      repositoryService = new WorkflowRepositoryServiceClient(binding, endpoint);

    When I call the login method, the call does reach machine B, but the username and password in Login(string username, string password) arrive as null on machine B. Any idea what I am doing wrong here?
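
    One possible culprit (a guess, since the service-side configuration isn't shown) is an encoding mismatch: the hand-built binding uses MTOM, and if machine B's endpoint expects text encoding the message can deserialize with null parameters. Reusing the binding configuration that the service reference generated in app.config sidesteps this, since only the address changes. The endpoint configuration name below is hypothetical; use the one in your <client> section:

      // "BasicHttpBinding_IWorkflowRepositoryService" is a hypothetical
      // endpoint configuration name from app.config; adjust to your project.
      var endpoint = new EndpointAddress(
          new Uri(ConfigurationManager.AppSettings["ServiceURLForMachineB"]));
      repositoryService = new WorkflowRepositoryServiceClient(
          "BasicHttpBinding_IWorkflowRepositoryService", endpoint);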

  • Creating a [materialised] view from generic data in Oracle/MySQL

    - by Andrew White
    I have a generic data model with three tables (PRIMARY KEY clauses added here, since MySQL requires AUTO_INCREMENT columns to be keyed):

      CREATE TABLE Properties (
        propertyId int(11) NOT NULL AUTO_INCREMENT,
        name varchar(80) NOT NULL,
        PRIMARY KEY (propertyId)
      );

      CREATE TABLE Customers (
        customerId int(11) NOT NULL AUTO_INCREMENT,
        customerName varchar(80) NOT NULL,
        PRIMARY KEY (customerId)
      );

      CREATE TABLE PropertyValues (
        propertyId int(11) NOT NULL,
        customerId int(11) NOT NULL,
        value varchar(80) NOT NULL
      );

      INSERT INTO Properties VALUES (1, 'Age');
      INSERT INTO Properties VALUES (2, 'Weight');
      INSERT INTO Customers VALUES (1, 'Bob');
      INSERT INTO Customers VALUES (2, 'Tom');
      INSERT INTO PropertyValues VALUES (1, 1, '34');
      INSERT INTO PropertyValues VALUES (2, 1, '80KG');
      INSERT INTO PropertyValues VALUES (1, 2, '24');
      INSERT INTO PropertyValues VALUES (2, 2, '53KG');

    What I would like to do is create a view whose columns are all the ROWS in Properties and whose rows are the entries in Customers, with the cell values populated from PropertyValues, e.g.:

      customerId  Age  Weight
      1           34   80KG
      2           24   53KG

    I'm thinking I need a stored procedure to do this, and perhaps a materialised view (the entries in the table "Properties" change rarely). Any tips?
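
    For the two properties shown, a static pivot with conditional aggregation does the job (in MySQL a regular view can wrap it; true materialised views exist only on the Oracle side). Generating the CASE branches dynamically from the Properties table, e.g. inside a stored procedure, covers the general case. A sketch:

      CREATE VIEW CustomerProperties AS
      SELECT c.customerId,
             MAX(CASE WHEN p.name = 'Age'    THEN pv.value END) AS Age,
             MAX(CASE WHEN p.name = 'Weight' THEN pv.value END) AS Weight
      FROM Customers c
      JOIN PropertyValues pv ON pv.customerId = c.customerId
      JOIN Properties     p  ON p.propertyId  = pv.propertyId
      GROUP BY c.customerId;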

  • How should my application keep clients in sync with schema changes to HTML5 databases?

    - by Chad Johnson
    I want to incorporate HTML5 database storage into my web application so that it is accessible offline. I've done lots of development in server-side environments with databases, and we all know that schema additions and modifications are often necessary. I am wondering what should happen if my application uses an offline database schema and that schema changes. How do I prevent the application from breaking on the client side? How do I ensure the database is always up to date on the client end? Anyone have any solutions?
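
    A sketch of one approach, assuming the Web SQL flavour of HTML5 storage (the version strings and schema below are made up): the database carries a version, and the client runs migrations to catch up, mirroring server-side migration tools.

      // Open whatever version is on the client ('' = don't insist on one)
      var db = openDatabase('appdb', '', 'App data', 2 * 1024 * 1024);

      if (db.version === '1.0') {
          // Migrate 1.0 -> 2.0; changeVersion bumps the version atomically
          db.changeVersion('1.0', '2.0', function (tx) {
              tx.executeSql('ALTER TABLE notes ADD COLUMN updated_at INTEGER');
          });
      }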

  • How to convert an application to be a database-independent app?

    - by Eslam
    I have a Java web application that uses Informix as its back-end database. We then decided to make the app work with SQL Server, so I changed all the Informix-specific syntax to SQL Server syntax, and in the future we may switch to Oracle, which would mean repeating the pain all over again. As a result I decided to make my application database-independent, able to work with any DB vendor smoothly, but so far I have no idea how to do that, so your ideas are welcome.
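
    One well-worn route (a sketch, not the only option) is to push all persistence through an ORM such as Hibernate/JPA: vendor-specific SQL is generated from a dialect setting, so switching databases becomes a configuration change plus a driver swap. The unit name and connection URL below are illustrative:

      <!-- persistence.xml -->
      <persistence-unit name="myApp">
        <properties>
          <!-- Informix today -->
          <property name="hibernate.dialect"
                    value="org.hibernate.dialect.InformixDialect"/>
          <!-- Switching to SQL Server later means changing only the dialect
               (org.hibernate.dialect.SQLServerDialect), the JDBC driver,
               and the connection URL below -->
          <property name="javax.persistence.jdbc.url"
                    value="jdbc:informix-sqli://dbhost:9088/mydb:INFORMIXSERVER=srv"/>
        </properties>
      </persistence-unit>

    Any hand-written SQL that can't go through the ORM can be isolated behind DAO interfaces, with one small vendor-specific implementation each.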

  • algorithm for checking addresses for matches?

    - by user151841
    I'm working on a survey program where people will be given promotional considerations the first time they fill out a survey. In a lot of scenarios, the only way we can stop people from cheating the system and getting a promotion they don't deserve is to check street address strings against each other. I was looking at using Levenshtein distance to give me a number measuring similarity, and considering those below a certain threshold duplicates. However, if someone were looking to game the system, they could easily write "S 5th St" instead of "South Fifth Street", and Levenshtein would consider those strings to be very different. So then I was thinking of converting all strings to a 'standard address form', i.e. 'South' becomes 's', 'Fifth' becomes '5th', etc. Then I started thinking this is hopeless, and too much effort to get working robustly. Is it? I'm working with PHP/MySQL, so I have the limitations inherent in that system.
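
    A sketch of the normalise-then-compare idea in PHP (the abbreviation map below is a tiny illustration; the USPS publishes a full list of standard street-suffix abbreviations that could seed it):

      <?php
      function normalize_address($addr) {
          $map = array(
              '/\bsouth\b/'  => 's',   '/\bnorth\b/'  => 'n',
              '/\bstreet\b/' => 'st',  '/\bavenue\b/' => 'ave',
              '/\bfifth\b/'  => '5th',
          );
          $addr = strtolower(trim($addr));
          $addr = preg_replace(array_keys($map), array_values($map), $addr);
          return preg_replace('/\s+/', ' ', $addr);   // collapse whitespace
      }

      // Normalized forms match exactly, so levenshtein() returns 0;
      // small non-zero distances can still flag near-duplicates.
      echo levenshtein(normalize_address('South Fifth Street'),
                       normalize_address('S 5th St'));   // 0
      ?>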

  • Debugging stack data not assigned to a named variable

    - by gibbss
    Is there a way to view stack elements like unassigned return values, or exceptions that are not assigned to a local variable (e.g. throw new ...)? For example, suppose I have code along the lines of:

      public String foo(InputStream in) throws IOException {
          NastyObj obj = null;
          try {
              obj = new NastyObj(in);
              return (obj.read());
          } finally {
              if (obj != null)
                  obj.close();
          }
      }

    Is there any way to view the return or exception value without stepping to a higher-level frame where it is assigned? This is particularly relevant with exceptions, because you often have to step back up through a number of frames to find an actual handler. I usually use the Eclipse debugging environment, but any answer is appreciated. Also, if this cannot be done, can you explain why? (A JVM or JPDA limitation?)

  • Loading an NSArray on iPhone from a dynamic database URL in PHP or ASP

    - by Brad
    I have a database online that I would like to load into an NSArray in my app. I can use arrayWithContentsOfURL: with a static file, but I really need to point it at a URL that generates a plist from the database, and load that into the array. I can use ASP or PHP. I tried setting the response type to "text/xml", but that doesn't help. Any thoughts?
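
    A sketch of the PHP side (the hard-coded rows stand in for real database results): emit a well-formed plist document, which arrayWithContentsOfURL: can parse directly.

      <?php
      header('Content-Type: text/xml');
      $rows = array('apple', 'banana');   // in practice: fetched from the DB
      echo '<?xml version="1.0" encoding="UTF-8"?>' . "\n";
      echo '<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">' . "\n";
      echo "<plist version=\"1.0\">\n<array>\n";
      foreach ($rows as $r) {
          echo '  <string>' . htmlspecialchars($r) . "</string>\n";
      }
      echo "</array>\n</plist>\n";
      ?>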

  • Should a Trim method generally go in the Data Access Layer or within the Domain Layer?

    - by jpierson
    I'm dealing with a database that contains data with inconsistencies such as leading and trailing white space. In general I see a lot of developers practice defensive coding by trimming almost all strings that come from the database and may have been entered by a user at some point. In my opinion it is better to do such formatting before data is persisted, so that it is done only once and the data is then in a consistent and reliable state. Unfortunately that is not the case here, which leads me to the next best solution: using a Trim method. If I trim all data as part of my data access layer, then I don't have to concern myself with defensive trimming within the business objects of my domain layer. If I instead put the trimming responsibility in my business objects, such as in the set accessors of my C# properties, I should get the same net result, except the trim will operate on all values assigned to my business objects' properties, not just the ones that come from the inconsistent database. I guess, as a somewhat philosophical question that may determine the answer, I could ask: "Should the domain layer be responsible for defensive/coercive formatting of data?" Would it make sense to have a set accessor for a PhoneNumber property on a business object accept an unformatted or formatted string and then attempt to format it as required, or should this responsibility be pushed to the presentation and data access layers, leaving the domain layer more strict in the type of data that it will accept? I think this may be the more fundamental question.

    Update: Below are a few links that I thought I should share on the topic of data cleansing.

      Information service patterns, Part 3: Data cleansing pattern
      LINQ to SQL - Format a string before saving?
      How to trim values using Linq to Sql?
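
    For reference, the domain-layer variant under discussion is small; a sketch (the property name is illustrative):

      private string _phoneNumber;

      // Coercive formatting in the setter: every assignment is normalized,
      // whether the value came from the database, the UI, or a test.
      public string PhoneNumber
      {
          get { return _phoneNumber; }
          set { _phoneNumber = (value == null) ? null : value.Trim(); }
      }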

  • Replacing multiple patterns in a block of data

    - by VikrantY
    Hi all, I need to find the most efficient way of matching multiple regular expressions against a single block of text. To give an example of what I need, consider the text: "Hello World what a beautiful day". I want to replace "Hello" with "Bye" and "World" with "Universe". I can of course do this in a loop, using something like the String.replace functions available in various languages. However, I could have a huge block of text and multiple string patterns that I need to match and replace. I was wondering whether I can use regular expressions to do this efficiently, or whether I have to use a parser, like LALR. I need to do this in JavaScript, so if anyone knows tools that can get it done, it would be appreciated.
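
    A sketch of a single-pass approach in JavaScript: build one alternation regex from the keys of a replacement map, so the text is scanned once and earlier replacements can never be re-matched.

      var map = { Hello: 'Bye', World: 'Universe' };

      // The keys here contain no regex metacharacters; escape them if yours might.
      var pattern = new RegExp(Object.keys(map).join('|'), 'g');

      var input = 'Hello World what a beautiful day';
      var output = input.replace(pattern, function (match) {
          return map[match];
      });
      // output === 'Bye Universe what a beautiful day'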

  • Strategy to structure a search index in a relational database

    - by neilc
    I am interested in suggestions for building an efficient and robust structure for indexing products in a new database I am building (I'm using MySQL). When a product is entered through the form, there are three parts I am interested in indexing for search purposes: the product title, the product description, and tags. The most important is the title, followed by tags, followed by the description. I was thinking of using the following structure:

      CREATE TABLE `searchindex` (
        `id` INT NOT NULL,
        `word` VARCHAR( 255 ) NOT NULL,
        `weighting` INT NOT NULL,
        `product_id` INT NOT NULL,
        PRIMARY KEY ( `id` )
      )

    Then each time a product is created I would split apart the title, description and tags (removing common words) and award them a weighting. It is then trivial to select out the words and corresponding products and order them by weighting. Is there a better way to do this? I would be worried that this strategy would slow down over time, as the database fills up.
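
    An alternative worth benchmarking before rolling a custom index (the table and column names below are assumptions based on the question): MySQL's built-in FULLTEXT indexes, which require MyISAM tables in MySQL versions of this era, maintain the word index automatically, and the weighting can live in the query instead of in a table.

      ALTER TABLE products
        ADD FULLTEXT (title),
        ADD FULLTEXT (tags),
        ADD FULLTEXT (description),
        ADD FULLTEXT (title, tags, description);

      SELECT id,
             MATCH (title)       AGAINST ('widget') * 3 +
             MATCH (tags)        AGAINST ('widget') * 2 +
             MATCH (description) AGAINST ('widget')     AS score
      FROM products
      WHERE MATCH (title, tags, description) AGAINST ('widget')
      ORDER BY score DESC;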

  • How to use trigger() in jQuery to pass parameter data as well as the control (this)

    - by Shantanu Gupta
    I am trying to use the same validation function for all my controls, but I don't know much jQuery and haven't been able to pass the parameters through to the handler via trigger. I want to pass the textbox id to my custom function. How can I do this?

      <script src="http://code.jquery.com/jquery-latest.js"></script>
      <script type="text/javascript">
      $(document).ready(function() {
          $("#txtFirstName").bind('focusout', function() {
              $(this).trigger("customblurfocus",
                  [$(this).val(), $(this).val() + " Required !", "Ok !"]);
          });

          $(this).bind('customblurfocus', function(defaultInputValue, ErrorMessage, ValidMessage) {
              alert($(this).val());
              if ($(this).val() == "" || $(this).val() == defaultInputValue) {
                  $(this).val(defaultInputValue);
                  $(this).siblings("span[id*='error']").toggleClass("error_set").text(ErrorMessage).fadeOut(3000);
              } else {
                  $(this).siblings("span[id*='error']").toggleClass("error_ok").text(ValidMessage).show(1000);
              }
          });
      });
      </script>
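
    For what it's worth, a sketch of a likely fix (an assumption about the intent): jQuery always passes the event object as the handler's first argument, with the array given to trigger() filling the remaining parameters, and binding on the textbox itself keeps this pointing at the control.

      $('#txtFirstName').bind('customblurfocus',
          function (event, defaultInputValue, errorMessage, validMessage) {
              if ($(this).val() === '' || $(this).val() === defaultInputValue) {
                  alert(errorMessage);   // or update the sibling error span
              } else {
                  alert(validMessage);
              }
          });

      $('#txtFirstName').bind('focusout', function () {
          $(this).trigger('customblurfocus',
              [$(this).val(), $(this).val() + ' Required !', 'Ok !']);
      });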

  • GZipStream in a WebHttpResponse producing no data

    - by Pierre 303
    I want to compress my HTTP responses for clients that support it. Here is the code used to send a standard response:

      IHttpClientContext context = (IHttpClientContext)sender;
      IHttpRequest request = e.Request;
      string responseBody = "This is some random text";

      IHttpResponse response = request.CreateResponse(context);
      using (StreamWriter writer = new StreamWriter(response.Body))
      {
          writer.WriteLine(responseBody);
          writer.Flush();
          response.Send();
      }

    The code above works fine. Then I added gzip support, below. When I test it with a browser that supports gzip, or with a custom method, it returns an empty string. I'm sure I'm missing something simple, but I just can't find it...

      IHttpClientContext context = (IHttpClientContext)sender;
      IHttpRequest request = e.Request;
      string acceptEncoding = request.Headers["Accept-Encoding"];
      string responseBody = "This is some random text";

      IHttpResponse response = request.CreateResponse(context);
      if (acceptEncoding != null && acceptEncoding.Contains("gzip"))
      {
          byte[] bytes = ASCIIEncoding.ASCII.GetBytes(responseBody);
          response.AddHeader("Content-Encoding", "gzip");
          using (GZipStream writer = new GZipStream(response.Body, CompressionMode.Compress))
          {
              writer.Write(bytes, 0, bytes.Length);
              writer.Flush();
              response.Send();
          }
      }
      else
      {
          using (StreamWriter writer = new StreamWriter(response.Body))
          {
              writer.WriteLine(responseBody);
              writer.Flush();
              response.Send();
          }
      }
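
    A likely explanation (a guess from the code alone): GZipStream buffers its output and only writes the final compressed block and trailer when it is closed, and Flush() does not force them out, so Send() runs before the compressed body exists. Disposing the stream before sending should help:

      byte[] bytes = ASCIIEncoding.ASCII.GetBytes(responseBody);
      response.AddHeader("Content-Encoding", "gzip");
      using (GZipStream gzip = new GZipStream(response.Body, CompressionMode.Compress))
      {
          gzip.Write(bytes, 0, bytes.Length);
      }   // Dispose() flushes the gzip trailer into response.Body
      response.Send();    // now the compressed body is complete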

  • When HDD becomes full, how to create a symbolic link to the data store on another disk?

    - by Brij Raj Singh
    I have a Linux Ubuntu machine which has an X GB hard disk. There is a folder, say, /opt/software/data. The disk /dev/sda1 is almost full, and I have attached another disk at /dev/sda2, which is mounted at /hdd2. Is it possible for me to link the folder /opt/software/data with /hdd2/software/data so that every file gets stored in /hdd2/software/data but may be referred to from /opt/software/data? I can't reinstall the software that creates this data to change its default storage location.
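
    A sketch of the usual move-and-symlink dance (paths taken from the question; stop the software first so nothing writes during the move):

      sudo mkdir -p /hdd2/software
      sudo mv /opt/software/data /hdd2/software/data     # move existing contents
      sudo ln -s /hdd2/software/data /opt/software/data  # old path now points at the new disk

    A bind mount (mount --bind /hdd2/software/data /opt/software/data) is an alternative if the software refuses to follow symlinks.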

  • Fact table with multiple facts

    - by Jeff Meatball Yang
    I have a dimension (SiteItem) that has two important facts:

      perUserClicks
      perBrowserClicks

    However, within this dimension, I have groups of dimensions based on an attribute column (let's call the groups AboveFoldItems, LeftNavItems, OnTheFlyItems, etc.), and each group has more facts that are specific to that group:

      AboveFoldItems: eyeTime, loadTime
      LeftNavItems:   mouseOverTime
      OnTheFlyItems:  doesn't have any extra, but may in the future

    Is the following fact table schema ok?

      DateKey
      SessionKey
      SiteItemKey
      perUserClicks
      perBrowserClicks
      eyeTime
      loadTime
      mouseOverTime

    It seems a little wasteful, since only some columns pertain to some dimension keys (the irrelevant facts are left NULL). But... this seems like it would be a common problem, so there should be a common solution for this, right?

  • Can I configure AutoMapper to read from custom column names when mapping from IDataReader?

    - by rohancragg
    Pseudo-code for the mapping configuration (as below) is not possible, since the lambda only lets us access the type IDataReader, whereas when actually mapping, AutoMapper will reach into each "cell" of each IDataRecord while IDataReader.Read() == true:

      var mappingConfig = Mapper.CreateMap<IDataReader, IEnumerable<MyDTO>>();
      mappingConfig.ForMember(
          destination => destination.???,
          options => options.MapFrom(source => source.???));

    Can anyone think of a way to do this using AutoMapper configuration at runtime, or just some other dynamic approach that meets the requirement below? The requirement is to support any incoming IDataReader, which may have column names that don't match the property names of MyDTO, and there is no naming convention I can rely on. Instead we'll ask the user at runtime to cross-reference the expected column names with the actual column names found in the IDataReader via IDataReader.GetSchemaTable().
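
    Absent AutoMapper support, a hand-rolled fallback is straightforward (a sketch, not AutoMapper itself; it assumes MyDTO's properties are settable and type-compatible with the reader's columns): let the user's cross-reference drive reflection per row.

      // needs: using System.Collections.Generic; using System.Data;
      static IEnumerable<MyDTO> MapAll(IDataReader reader,
                                       IDictionary<string, string> columnMap)
      {
          var props = typeof(MyDTO).GetProperties();
          while (reader.Read())
          {
              var dto = new MyDTO();
              foreach (var prop in props)
              {
                  string column;
                  if (!columnMap.TryGetValue(prop.Name, out column))
                      continue;                  // property not cross-referenced
                  int ordinal = reader.GetOrdinal(column);
                  if (!reader.IsDBNull(ordinal))
                      prop.SetValue(dto, reader.GetValue(ordinal), null);
              }
              yield return dto;
          }
      }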

  • API for displaying graphs in a web app?

    - by Rosarch
    I have graphs (nodes/arcs) that I want to display to the user. I need to be able to capture the event of a user clicking on a node. Something like Google Charts API's Simple Org Chart would be great, but it looks like it only supports trees. What other service can I use? Or should I hack something ugly together using Google Charts?

  • Using awk to return only certain chunks of data

    - by Koriar
    I'm not 100% certain how to phrase my question simply, so I apologize if this has been answered somewhere and I was just unable to find it. What I have are debug logs with authentication packets in them, along with a bunch of other output. I need to search through about 2 million lines of logs to find every packet that contains a certain MAC address. The packets look something like this (slightly censored):

      -----------------[ header ]-----------------
      Event: Authd-Response (1900)
      Sequence: -54
      Timestamp: 1969-12-31 19:30:00 (0)
      ---------------[ attributes ]---------------
      Auth-Result = Auth-Accept
      Service-Profile-SID = 53
      Service-Profile-SID = 49
      RADIUS-Access-Accept-Attr/WiMAX-Capability = 0x(numbers)
      Session-Timeout = 3600
      Service-Profile-SID = 4
      Service-Profile-SID = 29
      Chargeable-User-Identity = "(Numbers)"
      User-Password = "(the MAC address I'm looking for)"
      --------------------------------------------

    However, there are about 10 different possible types with different possible lengths. They all start with the header line and end with the all-dashes line. I've had success using awk to get the blocks themselves using this:

      awk '/-----------------\[ header \]-----------------/,/--------------------------------------------/' filename.txt

    But I was hoping to be able to return only the packets which contain the MAC address that I need. I've been trying to figure this out for a few days now and I'm pretty stuck. I could try and write a bash script, but I could swear that I've used awk to do something like this before...
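
    A sketch that buffers each packet and prints it only if the target appears (the MAC is hard-coded here as an example; it could be passed in with -v instead):

      awk '
      /^-+\[ header \]-+$/ { buf = ""; found = 0 }   # new packet starts
      { buf = buf $0 "\n" }                          # accumulate every line
      /00:11:22:33:44:55/ { found = 1 }              # target seen in this packet
      /^-+$/ { if (found) printf "%s", buf }         # all-dashes line ends it
      ' filename.txt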

  • JSF Open new window and display bean data

    - by JSFPRINTER
    I am running JSF 2.0 and the latest version of PrimeFaces, 2.2RC1 I believe. I am trying to create a printer-friendly window. When the user clicks a p:commandLink, I want a new window to open and display an xhtml file I have named printView.xhtml. I can get the window working fine using JavaScript's window.open, but when the new window opens it will not render any values; it just displays everything as #{myBean.value}. Does anyone know how to properly open a window and extend the current scope of the application into that window, so I can call all of my managed beans properly and display the values, etc.?
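
    A sketch of one likely fix (it assumes the default FacesServlet mapping): seeing the literal #{myBean.value} suggests the new window bypasses the FacesServlet entirely, so EL is never evaluated. Opening the view through the mapped URL, with the bean in session scope so both windows share it, should render normally:

      <!-- open the print view through the FacesServlet mapping (*.jsf assumed) -->
      <p:commandLink onclick="window.open('#{request.contextPath}/printView.jsf',
                                          'printwin'); return false;">
          Printer-friendly view
      </p:commandLink>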

  • Subsetting and lagging a list data structure in R

    - by user1234440
    I have a list that is indexed like the following:

      > list.stuff
      [[1]]
      [[1]]$vector ...
      [[1]]$matrix ...
      [[1]]$vector ...

      [[2]]
      NULL

      [[3]]
      [[3]]$vector ...
      [[3]]$matrix ...
      [[3]]$vector ...
      ...

    Each segment in the list is indexed according to another vector of indexes:

      > index.list
      1, 3, 5, 10, 15

    In list.stuff, only at each of the indexes 1, 3, 5, 10, 15 will there be two vectors and one matrix; everything else will be NULL, like [[2]]. What I want to do is lag, like the lag.xts function, so that whatever is stored in [[1]] is pushed to [[3]], and the last one drops off. This also requires subsetting the list, if that's possible. I was wondering whether there are functions that handle this kind of list manipulation. My thinking is that, as with xts, a time series can be extracted based on an index you supply:

      xts.object[index,]   # returns the rows 1,3,5,10,15

    From here I can lag it with:

      lag.xts(xts.object[index,])

    Any help would be appreciated, thanks.

    EDIT: Here is a reproducible example:

      list.stuff <- list()
      vec  <- c(1,2,3,4,5,6,7,8,9)
      vec2 <- c(1,2,3,4,5,6,7,8,9)
      mat  <- matrix(c(1,2,3,4,5,6,7,8), 4, 2)
      list.vec.mat <- list(vec=vec, mat=mat, vec2=vec2)
      ind <- c(2,4,6,8,10)
      for (i in ind) {
          list.stuff[[i]] <- list.vec.mat
      }
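
    For what it's worth, a minimal sketch against the reproducible example (base R only, no package assumed): shift each indexed slot to the next index in ind and drop the last.

      lag.list <- function(x, ind) {
          lagged <- vector("list", length(x))       # all slots start as NULL
          for (i in seq_along(ind)[-length(ind)]) {
              lagged[[ind[i + 1]]] <- x[[ind[i]]]   # contents of ind[i] -> ind[i+1]
          }
          # the slot at ind[1] stays NULL; the old last element has dropped off
          lagged
      }

      lagged <- lag.list(list.stuff, ind)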

  • NSPredicate causes update editing to return NSFetchedResultsChangeDelete not NSFetchedResultsChangeUpdate

    - by Matthew Weiss
    I have a predicate inside - (NSFetchedResultsController *)fetchedResultsController, set up in the standard way starting from the CoreDataBook example:

      NSPredicate *predicate = [NSPredicate predicateWithFormat:
          @"state = %@ && date >= %@ && date < %@", @"1", fromDate, toDate];
      [fetchRequest setPredicate:predicate];

    This works fine; however, when editing an item it returns NSFetchedResultsChangeDelete, not Update. When the main view returns, it is missing the item. If I restart the simulator, the delete was not saved and the correct editing result is shown, with the predicate working correctly.

      case NSFetchedResultsChangeDelete:
          [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath]
                            withRowAnimation:UITableViewRowAnimationFade];
          break;

    I can confirm the behavior by commenting out ONLY the two predicate lines, and then all works as it should, correctly returning the full set after editing and calling NSFetchedResultsChangeUpdate instead of NSFetchedResultsChangeDelete. I have read http://matteocaldari.it/2009/11/multiple-contexts-controllers-delegates-and-coredata-bug which reports similar behavior, but I have not found a workaround for my problem. I can

  • querying a large text file containing JSON objects

    - by Maciek Sawicki
    Hi, I have a text file, a few gigabytes in size, in the format:

      {"user_ip":"x.x.x.x", "action_type":"xxx", "action_data":{"some_key":"some_value"...},...}

    Each entry is one line. First, I would like to easily find entries for a given IP. This part is easy because I can use grep, for example. However, even for this I would like a better solution, because I want the response as fast as possible. The next part is more complicated: I would like to find entries from a selected IP, of a selected type, and with a particular value of some_key in action_data. Probably I will have to convert this file to an SQL database (probably SQLite, because it will be a desktop app), but I'm asking whether better solutions exist.
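
    A sketch of the SQLite route in Python (column names follow the sample line; any key from action_data that is queried often deserves its own indexed column): a one-time import, then indexed lookups.

      import json, sqlite3

      conn = sqlite3.connect("log.db")
      conn.execute("""CREATE TABLE IF NOT EXISTS log
                      (user_ip TEXT, action_type TEXT, action_data TEXT)""")

      # One-time import: one JSON object per line, as in the log format above
      with open("log.txt") as f:
          rows = ((e["user_ip"], e["action_type"], json.dumps(e["action_data"]))
                  for e in map(json.loads, f))
          conn.executemany("INSERT INTO log VALUES (?, ?, ?)", rows)
      conn.execute("CREATE INDEX IF NOT EXISTS idx_ip ON log (user_ip)")
      conn.commit()

      # Fast per-IP lookup; filtering on a key inside action_data still needs
      # a LIKE match, or promoting that key to its own indexed column.
      for row in conn.execute("SELECT * FROM log WHERE user_ip = ?", ("1.2.3.4",)):
          print(row)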

  • array of structures, or structure of arrays?

    - by Jason S
    Hmmm. I have a table, which is an array of structures, that I need to store in Java. The naive don't-worry-about-memory approach says do this:

      public class Record {
          final private int field1;
          final private int field2;
          final private long field3;
          /* constructor & accessors here */
      }

      List<Record> records = new ArrayList<Record>();

    If I end up using a large number (10^6) of records, where individual records are accessed occasionally, one at a time, how would I figure out how the preceding approach (an ArrayList) compares with an optimized approach for storage costs:

      public class OptimizedRecordStore {
          final private int[] field1;
          final private int[] field2;
          final private long[] field3;

          Record getRecord(int i) {
              return new Record(field1[i], field2[i], field3[i]);
          }
          /* constructor and other accessors & methods */
      }

    Edit: assume the number of records is something that changes infrequently or never. I'm probably not going to use the OptimizedRecordStore approach, but I want to understand the storage cost issue so I can make that decision with confidence. Obviously, if I add or change the number of records in the OptimizedRecordStore approach above, I either have to replace the whole object with a new one or remove the "final" keyword. kd304 brings up a good point that was in the back of my mind: in other situations similar to this, I need column access on the records, e.g. if field1 and field2 are "time" and "position", and it's important for me to get those values as an array for use with MATLAB, so I can graph/analyze them efficiently.
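
    As a rough back-of-envelope, assuming a 64-bit HotSpot JVM with compressed oops (exact numbers vary by JVM and flags):

      // Object-per-row: 12-byte header + 4 + 4 + 8 field bytes = 28,
      // padded to 32 bytes, plus a 4-byte reference slot in the ArrayList's
      // backing array: roughly 36 bytes per record.
      // Packed arrays:  4 + 4 + 8 = 16 bytes per record.
      // At 10^6 records that is roughly 36 MB vs 16 MB, so the
      // OptimizedRecordStore layout is a bit over 2x smaller, before
      // counting reduced garbage-collector pressure.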

  • Packing differently sized chunks of data into multiple bins

    - by knizz
    EDIT: It seems like this problem is called the "cutting stock problem".

    I need an algorithm that gives me the (space-)optimal arrangement of chunks in bins. One way would be to put the bigger chunks in first. But see how that algorithm fails in this example:

      Chunks           Bins
      -----------------------------
      AAA BBB CC DD    (       ) (   )

      Algorithm        Result
      -----------------------------
      biggest first    (AAABBB )  (CC )
      optimal          (AAACCDD)  (BBB)

    "Biggest first" can't fit DD in. Maybe it helps to build a table like this:

      Size 1:  ---
      Size 2:  CC, DD
      Size 3:  AAA, BBB
      Size 4:  CCDD
      Size 5:  AAACC, AAADD, BBBCC, BBBDD
      Size 6:  AAABBB
      Size 7:  AAACCDD, BBBCCDD
      Size 8:  AAABBBCC, AAABBBDD
      Size 10: AAABBBCCDD
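
    Since this instance is tiny, a brute-force sketch in Python (chunk sizes and bin capacities taken from the example above) finds the exact optimum that "biggest first" misses; real instances need cutting-stock techniques such as column generation or an ILP solver, because the problem is NP-hard.

      from itertools import product

      chunks = {"AAA": 3, "BBB": 3, "CC": 2, "DD": 2}   # name -> size
      bins = [7, 3]                                     # bin capacities

      best = None
      # try every assignment of chunks to bins (bin index per chunk)
      for assign in product(range(len(bins)), repeat=len(chunks)):
          load = [0] * len(bins)
          for (_, size), b in zip(chunks.items(), assign):
              load[b] += size
          if all(l <= c for l, c in zip(load, bins)):   # feasible packing?
              waste = sum(c - l for l, c in zip(load, bins))
              if best is None or waste < best[0]:
                  best = (waste, assign)

      # tuple positions follow the insertion order of `chunks`
      print(best)  # (0, (0, 1, 0, 0)): AAA, CC, DD -> bin 0; BBB -> bin 1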
