Search Results

Search found 59975 results on 2399 pages for 'data comparison'.


  • Partner case - ISE (Germany) - IDS brings light into Investment Controlling with Exadata

    - by Javier Puerta
    (Original post in German: IDS bringt mit Exadata Licht ins Investmentcontrolling) "The amount of data that IDS GmbH (Analysis and Reporting Services) has to cope with daily is enormous: at this subsidiary of Allianz SE, everything revolves around investment controlling. The company needed an extensible data warehouse solution in which all the data could be merged, harmonized and enriched. In the end, IDS chose the Oracle Exadata Database Machine as the optimal solution. The implementation was carried out jointly with Oracle Platinum Partner ISE, which provided the technical and advisory support and will be IDS's preferred consultant for any further Exadata development. See how Exadata is used, and why this investment has paid off for IDS, by watching the following video (in German)."

    Read the article

  • Why is my website not ranking on the first page of Google? [on hold]

    - by India SEO Analyst
    I am handling the website www.usamovingandstorage.com and targeting the keyword "chicago movers", but my website is on the third page. My website has good backlinks, and I recently removed irrelevant backlinks as well. I compared my competitors' websites, such as www.ampolmoving.com and www.chicagomovers.com: they have no such strong backlinks, but they rank on the first page in Google. I compared the three websites in www.opensiteexplorer.org, and my site gets good results there. So how did this happen? I need a full comparison: why is my site ranking on the third page, and what actions do I need to take to rank on the first page?

    Read the article

  • How to use Ubuntu Touch manage-address-books.py?

    - by Rotary Heart
    Well, I have been reading the docs for a few days and I found that I can "import" my contacts from a .csv file with the following: "Alternatively you can import contacts from a csv file. The csv file should be in same format as /usr/share/demo-assets/contacts-data/data.csv. Replace the sample data.csv file with your own version and run manage-address-books.py create to import your contacts." But I can't figure out how to use manage-address-books.py create. Could anyone help me? I know that I can use syncevolution, but I want to sync my .csv file too.

    Read the article

  • SQL binary value to PHP variable trailing zeros

    - by Agony
    Using a SQL query to pull data from an MSSQL database results in a value that still has the trailing zeros. The data in the database is stored as binary(13), so it will pull all 13 bytes. However, the value is text, so the padding zeros will generally show up as '?' in a form on the site - and in return will write wrong data back to the database later. So what I need is to select/display only the text itself, not all 13 bytes. Using: SELECT CONVERT(char,uilock_pw) AS uipwd FROM tbl_UserAccount or SELECT uilock_pw FROM tbl_UserAccount still keeps the trailing zeros in the char array. Example in database: 0x71776531323300000000000000 Would show up as: qwe123??????? But should be: qwe123 I'm not even sure what characters those ? represent. Using echo prints a normal qwe123 - but not in a form.
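
    Those '?' characters are the NUL ('\0') bytes that pad a binary(13) column, so the usual fix is to strip that padding after converting the value to text - in PHP, rtrim($value, "\0") does it. Below is a minimal C# sketch of the same idea (the variable name and the sample value are illustrative, not taken from the question):

        using System;

        class TrimBinaryPadding
        {
            static void Main()
            {
                // Hypothetical value as it comes back from CONVERT(char(13), uilock_pw):
                // "qwe123" followed by seven NUL padding bytes.
                string raw = "qwe123\0\0\0\0\0\0\0";

                // Drop the trailing NULs; only the real text remains.
                string cleaned = raw.TrimEnd('\0');

                Console.WriteLine(cleaned); // prints: qwe123
            }
        }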

    Read the article

  • Exploring TCP throughput with DTrace (2)

    - by user12820842
    Last time, I described how we can use the overlap in distributions of unacknowledged byte counts and send window to determine whether the peer's receive window may be too small, limiting throughput. Let's combine that comparison with a comparison of congestion window and slow start threshold, all on a per-port/per-client basis. This will help us:

    - Identify whether the congestion window or the receive window are limiting factors on throughput by comparing the distributions of congestion window and send window values to the distribution of outstanding (unacked) bytes. This will allow us to get a visual sense for how often we are thwarted in our attempts to fill the pipe due to congestion control versus the peer not being able to receive any more data.

    - Identify whether slow start or congestion avoidance predominate by comparing the overlap in the congestion window and slow start threshold distributions. If the slow start threshold distribution overlaps with the congestion window, we know that we have switched between slow start and congestion avoidance, possibly multiple times.

    - Identify whether the peer's receive window is too small by comparing the distribution of outstanding unacked bytes with the send window distribution (i.e. the peer's receive window). I discussed this here.

        # dtrace -s tcp_window.d
        dtrace: script 'tcp_window.d' matched 10 probes
        ^C

          cwnd                                        80       10.175.96.92
                 value  ------------- Distribution ------------- count
                  1024 |                                         0
                  2048 |                                         4
                  4096 |                                         6
                  8192 |                                         18
                 16384 |                                         36
                 32768 |@                                        79
                 65536 |@                                        155
                131072 |@                                        199
                262144 |@@@                                      400
                524288 |@@@@@@                                   798
               1048576 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@             3848
               2097152 |                                         0

          ssthresh                                    80       10.175.96.92
                 value  ------------- Distribution ------------- count
             268435456 |                                         0
             536870912 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 5543
            1073741824 |                                         0

          unacked                                     80       10.175.96.92
                 value  ------------- Distribution ------------- count
                    -1 |                                         0
                     0 |                                         1
                     1 |                                         0
                     2 |                                         0
                     4 |                                         0
                     8 |                                         0
                    16 |                                         0
                    32 |                                         0
                    64 |                                         0
                   128 |                                         0
                   256 |                                         3
                   512 |                                         0
                  1024 |                                         0
                  2048 |                                         4
                  4096 |                                         9
                  8192 |                                         21
                 16384 |                                         36
                 32768 |@                                        78
                 65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 5391
                131072 |                                         0

          swnd                                        80       10.175.96.92
                 value  ------------- Distribution ------------- count
                 32768 |                                         0
                 65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 5543
                131072 |                                         0

    Here we are observing a large file transfer via http on the webserver. Comparing these distributions, we can observe:

    - Slow start congestion control is in operation. The distribution of congestion window values lies below the range of slow start threshold values (which are in the 536870912+ range), so the connection is in slow start mode.

    - Both the unacked byte count and the send window values peak in the 65536-131071 range, but the send window value distribution is narrower. This tells us that the peer TCP's receive window is not closing.

    - The congestion window distribution peaks in the 1048576-2097152 range while the receive window distribution is confined to the 65536-131071 range. Since the cwnd distribution ranges as low as 2048-4095, we can see that for some of the time we have been observing the connection, congestion control has been a limiting factor on transfer, but for the majority of the time the receive window of the peer would more likely have been the limiting factor. However, we know the window has never closed, as the distribution of swnd values stays within the 65536-131071 range.

    So all in all we have a connection that has been mildly constrained by congestion control, but for the bulk of the time we have been observing it, neither congestion nor the peer receive window have limited throughput.
    Here's the script:

        #!/usr/sbin/dtrace -s

        tcp:::send
        / (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 /
        {
                @cwnd["cwnd", args[4]->tcp_sport, args[2]->ip_daddr] =
                    quantize(args[3]->tcps_cwnd);
                @ssthresh["ssthresh", args[4]->tcp_sport, args[2]->ip_daddr] =
                    quantize(args[3]->tcps_cwnd_ssthresh);
                @unacked["unacked", args[4]->tcp_sport, args[2]->ip_daddr] =
                    quantize(args[3]->tcps_snxt - args[3]->tcps_suna);
                @swnd["swnd", args[4]->tcp_sport, args[2]->ip_daddr] =
                    quantize((args[4]->tcp_window)*(1 << args[3]->tcps_snd_ws));
        }

    One surprise here is that slow start is still in operation - one would assume that for a large file transfer, acknowledgements would push the congestion window up past the slow start threshold over time. The slow start threshold is in fact still close to its initial (very high) value, so that would suggest we have not experienced any congestion (the slow start threshold is adjusted when congestion occurs). Also, the above measurements were taken early in the connection lifetime, so the congestion window did not get a chance to be bumped up to the level of the slow start threshold. A good strategy when examining these sorts of measurements for a given service (such as a webserver) would be to start by examining the distributions above aggregated by port number only, to get an overall feel for service performance, i.e. is congestion control or peer receive window size an issue, or are we unconstrained and able to fill the pipe? From there, the overlap of distributions will tell us whether to drill down into specific clients. For example, if the send window distribution has multiple peaks, we may want to examine whether particular clients show issues with their receive window.

    Read the article

  • Get/Post Controller Logic Best Practice

    - by Brian Mains
    In an ASP.NET MVC project (Razor), I have a GET request which loads one of two properties on a model, depending on the parameter passed into the action method. If the parameter has a value, the Group property is supplied data; if not, the Groups collection property is supplied data. In the POST action method, when I process the data, I have to provide similar logic to repopulate the view, and could get away with returning Action(param) (the GET response) to the caller. My question is: based on experience, is that a good practice to get into? I see some downsides to doing that, but it avoids code redundancy. Or is there a better alternative?
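
    One common alternative (a sketch with invented names, not code from the question) is to move the shared population logic into a private helper that both the GET and POST actions call, so neither action has to return the other:

        using System.Collections.Generic;
        using System.Web.Mvc;

        public class Group { public int Id { get; set; } }

        public class GroupViewModel
        {
            public int? GroupId { get; set; }
            public Group Group { get; set; }          // filled when a parameter is supplied
            public IList<Group> Groups { get; set; }  // filled when it is not
        }

        public class GroupController : Controller
        {
            [HttpGet]
            public ActionResult Edit(int? groupId)
            {
                return View(BuildModel(groupId));
            }

            [HttpPost]
            public ActionResult Edit(GroupViewModel posted)
            {
                // ... process the posted data ...
                // Repopulate the view with the same helper the GET action used.
                return View(BuildModel(posted.GroupId));
            }

            // The one place that decides which property gets data.
            private GroupViewModel BuildModel(int? groupId)
            {
                var model = new GroupViewModel { GroupId = groupId };
                if (groupId.HasValue)
                    model.Group = new Group { Id = groupId.Value }; // stand-in for real data access
                else
                    model.Groups = new List<Group>();               // stand-in for real data access
                return model;
            }
        }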

    Read the article

  • How to interpret Google's "Avg. Page Load Time"?

    - by hawbsl
    Is there any industry rule of thumb for what's considered an unacceptable load time vs. an OK one vs. a blisteringly fast one? We're just reviewing some Google Analytics data and seeing an Avg. Page Load Time of 0.74 seconds reported. I guess that's OK. However, it would be good if some meatier comparison data were available - a blog post, or some analysis of what speeds are generally being achieved by various kinds of sites. Any useful links to help someone interpret these speeds? If you Google it, you just get a lot of results dealing with how to improve your speed. We're not at that stage yet.

    Read the article

  • Hash Function Added To The PredicateEqualityComparer

    - by Paulo Morgado
    Some time ago I wrote a predicate equality comparer to be used with LINQ's Distinct operator. The Distinct operator uses an instance of an internal Set class to maintain the collection of distinct elements in the source collection, which in turn checks the hash code of each element (by calling the GetHashCode method of the equality comparer) and calls the Equals method of the comparer to disambiguate only if there's already an element with the same hash code in the collection. At the time I provided only the possibility to specify the comparison predicate, but, since in some cases comparing a hash code instead of calling the provided comparer predicate can be a significant performance improvement, I've added the possibility to add a hash function to the predicate equality comparer. You can get the updated code from the PauloMorgado.Linq project on CodePlex.
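
    A minimal sketch of the idea (my own illustration, not the actual PauloMorgado.Linq code): let the caller supply an optional hash function alongside the equality predicate, so that Distinct can reject most non-matching pairs on hash codes alone:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class PredicateEqualityComparer<T> : IEqualityComparer<T>
        {
            private readonly Func<T, T, bool> predicate;
            private readonly Func<T, int> hashFunction;

            public PredicateEqualityComparer(Func<T, T, bool> predicate,
                                             Func<T, int> hashFunction = null)
            {
                this.predicate = predicate;
                this.hashFunction = hashFunction;
            }

            public bool Equals(T x, T y) { return predicate(x, y); }

            // Without a hash function, every element gets the same hash code and
            // Distinct must call Equals against each candidate; with one, most
            // non-equal pairs are rejected by the hash comparison alone.
            public int GetHashCode(T obj)
            {
                return hashFunction == null ? 0 : hashFunction(obj);
            }
        }

        class Demo
        {
            static void Main()
            {
                var words = new[] { "one", "two", "three", "four" };
                // Distinct by string length; the hash function agrees with the predicate.
                foreach (var w in words.Distinct(new PredicateEqualityComparer<string>(
                             (a, b) => a.Length == b.Length, s => s.Length)))
                    Console.WriteLine(w); // one, three, four
            }
        }

    Note that the hash function must be consistent with the predicate: two elements the predicate considers equal must get the same hash code, or Distinct will never compare them.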

    Read the article

  • Do any database "styles" use discrete files for their tables?

    - by Brad
    I've been talking to some people at work who believe some versions of a database store their data in discrete tables. That is to say, you might open up a folder and see one file for each table in the database, then several other supporting files. They do not have a lot of experience with databases, but I have only been working with them for a little over half a year, so I am not a canonical source of info either. I've been touting the benefits of SQL Server over Access (and before this, Access over Excel. Great strides have been made :) ). But other people were of the impression that one of the benefits of using SQL Server over Access was that the data was not all consolidated down into one file. Yet SQL Server packs everything into a single .mdf file (plus the log file). My question is: is there an RDBMS which holds its data in multiple discrete files instead of one master file? And if the answer is yes, why do it one way over the other?

    Read the article

  • cVidya’s MoneyMap Achieves Oracle Exadata Optimized Status

    - by Javier Puerta
    cVidya's MoneyMap running on Oracle Exadata provides extreme performance, including a 4x-16x improvement in high data load rates, 4x faster data transformation and reconciliation, and query speeds - on a 2.5 billion record index - improved from hours to a few seconds! The MoneyMap solution enables operators to reconcile information from all network, operations and business support systems and, through an ongoing automated process, detects problem areas which impact profitability as a result of revenue leakage, data inconsistencies or resources that are not being used efficiently. Once detected, MoneyMap provides tools to promptly correct and manage the problems to achieve profit maximization. Learn more here.

    Read the article

  • Is there a way to develop desktop software using PHP?

    - by user1492018
    I have to develop a real estate marketing CRM application for my client, where the application is installed on the desktop but can also be accessed from the web. There are 2 reasons why they want the application to run from the desktop: so that it can work with or without an internet connection, and because they don't want their complete data to be online. They want to access some of the data, like property listings & inquiries (managed from the desktop application), from their website through a secure login & password. The data that is entered in the desktop application should be automatically synchronized with the website application. I was wondering if there is a way to develop this kind of software using PHP & MySQL. If yes, it would be great if anyone could provide me a reference link. Otherwise, please suggest which language I should use.

    Read the article

  • Desktop GUI Client - Remote RDBMS communication

    - by magom001
    Sorry if I am asking a trivial question, but I have been searching for a while without any luck. I need to design a system and I am looking for advice on the technology that should be used. The layout is very simple: it is a sales application with a centralized database and multiple clients. Each salesperson has a GUI app installed on his/her laptop that should be able to connect to the database to retrieve data and upload data (i.e. register new orders). My question is the following: how should the communication between the client and the server be implemented? I doubt that connecting directly to the RDBMS is a good idea... Should I use web services? XML-RPC? How should I implement authentication and encrypt the data? Thanks for your advice!
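
    One widely used approach is to put a thin web service in front of the database, so clients never hold RDBMS credentials: they authenticate against the service, and TLS on the connection handles the encryption. A minimal C# sketch of the client side (the URL, endpoint and token scheme are illustrative assumptions, not a prescribed design):

        using System;
        using System.Net.Http;
        using System.Net.Http.Headers;
        using System.Threading.Tasks;

        class SalesClient
        {
            static async Task Main()
            {
                var http = new HttpClient { BaseAddress = new Uri("https://sales.example.com/api/") };

                // Token obtained from a prior login call; sent with every request.
                http.DefaultRequestHeaders.Authorization =
                    new AuthenticationHeaderValue("Bearer", "<token-from-login>");

                // Retrieve data over HTTPS instead of opening a direct DB connection.
                string orders = await http.GetStringAsync("orders?salesperson=42");
                Console.WriteLine(orders);
            }
        }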

    Read the article

  • Generic Handler vs Direct Reference

    - by JNF
    In a project where I'm working on the data access layer, I'm trying to decide how to send data and objects to the next layer (and programmer). Is it better to tell him to reference my DLL, or should I build a generic handler and let him take the objects from there (i.e. in JSON format)? If I understand correctly, in case 2 he would have to handle the objects on his own, whereas in case 1 he will have the entities I've built. Note: it is very probable that other people will need to take the same data, though we're not up to that yet. Same question there - should I make it into a web service, or have them access the handler?

    Read the article

  • What is the Rails way to save images?

    - by user
    I develop on iOS, and I'm switching from a PHP backend to Ruby on Rails. The interchange format is JSON. A quick Google search for 'save images in Rails' turns up result after result about saving image data as blobs in the database. I might be mistaken, but I'm under the impression that saving image data in a database is a huge waste of time and space, as opposed to saving a link to the file location ('/img/subcat/4656.png'). In PHP, it's pretty standard to receive the data, generate a filename, save that file to disk, and then update the database with the image's location on disk. Is this the same for Rails, or is there some built-in ActiveRecord image functionality I'm not aware of?

    Read the article

  • using "IS" is better or checking for "NOT NULL"

    - by BDotA
    In C#, which style of coding is recommended - this one:

        if (sheet.Models.Data is GroupDataModel)
        {
            GroupDataModel gdm = (GroupDataModel)sheet.Models.Data;
            Group group = gdm.GetGroup(sheet.ActiveCell.Row.Index);
            if (group != null && controller != null)
            {
                controller.CheckApplicationState();
            }
        }

    or this one:

        var gdm = sheet.Models.Data as GroupDataModel;
        if (gdm != null)
        {
            Group group = gdm.GetGroup(sheet.ActiveCell.Row.Index);
            if (group != null && controller != null)
            {
                controller.CheckApplicationState();
            }
        }

    Read the article

  • Which is a better design pattern for a database wrapper: save as you go, or save when you're done?

    - by izuriel
    I know this is probably a bad way to ask this question; I was unable to find another question that addresses it. The full question is this: we're producing a wrapper for a database and have two different viewpoints on managing data with the wrapper. The first is that all changes made to a data object in code must be persisted in the database by calling a "save" method to actually save the changes. The other side is that these changes should be saved as they are made - so if I change a property it's saved, and if I change another it's saved as well. What are the pros/cons of either choice, and which is the "proper" way to manage the data?
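
    To make the two viewpoints concrete, here is a minimal sketch (the type names and the Database stand-in are invented for illustration) of what each wrapper style looks like to calling code:

        // Style 1: mutate freely in memory, persist on an explicit Save().
        public class ExplicitSaveRecord
        {
            public string Name { get; set; } // just changes memory

            public void Save()
            {
                Database.Write(this); // one round trip, when the caller decides
            }
        }

        // Style 2: every property assignment persists immediately.
        public class SaveAsYouGoRecord
        {
            private string name;
            public string Name
            {
                get { return name; }
                set { name = value; Database.Write(this); } // round trip per setter
            }
        }

        public static class Database
        {
            public static void Write(object record) { /* persistence stand-in */ }
        }

    The trade-off in rough terms: the explicit style batches changes (one write, natural transaction boundaries) but lets callers forget to save; the implicit style never loses an edit, but turns every assignment into a database round trip and makes a multi-property update non-atomic.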

    Read the article

  • secure offline PC storage accessible through javascript

    - by turbo2oh
    I'm attempting to build a browser-based HTML5 application that has the ability to store data locally on a PC (not a mobile device) when offline. This data is sensitive and must be secure. Of course, the trick is finding a way to access the secure data with JavaScript. I've ruled out browser local storage since it's not secure. Could this be accomplished with a local database? If so, where could the DB credentials be stored? JavaScript obviously doesn't seem like a good option for storing them, since it's user-readable.

    Read the article

  • Are session aware Models a bad thing?

    - by kevtufc
    I'm thinking specifically of Rails here, but I suspect this is a wider question. In a Rails web application, I'm using data from the session in models so that the models know who is logged in. I use this in a method which filters some data out of the database according to a very simple permissions system. The thing is: using sessions in models in Rails requires a bit of a workaround. It works, but I have a feeling that it's something I shouldn't be doing, and I'm worried there's a big gotcha I'm missing. I suppose the Right Thing To Do would be to return all the data and filter out the unwanted bits in the controller before passing the rest to the view, but doing it in the model avoids quite a bit of code duplication and so feels "cleaner." Can anyone tell me why I should or shouldn't do this? Or is it not a problem?

    Read the article

  • Collisions and Lists

    - by user50635
    I've run into an issue that breaks my collisions. Here's my method: gather input, project the rectangle, check for intersection and ispassable, update. The update method is built on object_position * seconds_passed * velocity * speed. Input changes velocity, which is normalized if > 1. This method works well when comparing against just one object; however, when I pass a list to the collision detector in a for loop, velocity gets changed back to non-zero when the loop hits an object that passes the test, and the object can pass through. Any solutions would be much appreciated. Side note: is there a more proper way to simulate movement?
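
    A likely culprit, given that description (sketched below with invented types - this is a guess, not code from the question): if each object in the loop sets the blocked/velocity state, a later non-colliding object overwrites the result of an earlier colliding one. Accumulating the result over the whole list avoids that:

        using System.Collections.Generic;

        public struct Box
        {
            public float X, Y, W, H;
            public bool Intersects(Box o)
            {
                return X < o.X + o.W && o.X < X + W && Y < o.Y + o.H && o.Y < Y + H;
            }
        }

        public class Obstacle
        {
            public Box Bounds;
            public bool IsPassable;
        }

        public static class Collision
        {
            public static bool Blocked(Box projected, List<Obstacle> objects)
            {
                bool blocked = false;
                foreach (var obj in objects)
                {
                    // Never set 'blocked' back to false here - a non-colliding
                    // object must not cancel an earlier collision in the list.
                    if (!obj.IsPassable && projected.Intersects(obj.Bounds))
                        blocked = true;
                }
                return blocked;
            }
        }

    Then zero the velocity once, after the whole list has been checked, rather than inside the per-object test.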

    Read the article

  • Is backing up a MySQL database in Git a good idea?

    - by wobbily_col
    I am trying to improve the backup situation for my application. I have a Django application and a MySQL database. I read an article suggesting backing up the database in Git. On the one hand I like it, as it will keep a copy of the data and the code in sync. But Git is designed for code, not for data; as such it will be doing a lot of extra work diffing the MySQL dump on every commit, which is not really necessary. If I compress the file before storing it, will Git still diff the files? (The dump file is currently 100MB uncompressed, 5.7MB when bzipped.) Edit: the code and database schema definitions are already in Git; it is really the data I am concerned about backing up now.

    Read the article

  • Generic IEqualityComparer

    - by Nettuce
    A generic equality comparer that takes a property expression or a comparison Func:

        using System;
        using System.Collections.Generic;
        using System.Linq.Expressions;

        public class GenericComparer<T> : IEqualityComparer<T> where T : class
        {
            private readonly Func<T, T, bool> comparerExpression;
            private readonly string propertyName;

            public GenericComparer(Func<T, T, bool> comparerExpression)
            {
                this.comparerExpression = comparerExpression;
            }

            public GenericComparer(Expression<Func<T, object>> propertyExpression)
            {
                // Value-type properties arrive wrapped in a boxing conversion
                // (a UnaryExpression), so unwrap before reading the member name.
                propertyName = (propertyExpression.Body is UnaryExpression
                    ? (MemberExpression)((UnaryExpression)propertyExpression.Body).Operand
                    : (MemberExpression)propertyExpression.Body).Member.Name;
            }

            public bool Equals(T x, T y)
            {
                // No predicate supplied: compare the named property via reflection;
                // otherwise, delegate to the supplied predicate.
                return comparerExpression == null
                    ? x.GetType().GetProperty(propertyName).GetValue(x, null)
                        .Equals(y.GetType().GetProperty(propertyName).GetValue(y, null))
                    : comparerExpression.Invoke(x, y);
            }

            public int GetHashCode(T obj)
            {
                // Hashes on ToString(); types that don't override ToString() will
                // all share one hash code, falling through to Equals every time.
                return obj.ToString().GetHashCode();
            }
        }
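
    For example (a hypothetical Person type; assuming the class above is in scope), the comparer plugs into LINQ's Distinct in either form:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class Person
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        class Demo
        {
            static void Main()
            {
                var people = new List<Person>
                {
                    new Person { Id = 1, Name = "Ann" },
                    new Person { Id = 1, Name = "Ann (duplicate)" },
                    new Person { Id = 2, Name = "Bob" },
                };

                // By comparison Func...
                var byPredicate = people.Distinct(
                    new GenericComparer<Person>((x, y) => x.Id == y.Id));

                // ...or by property expression.
                var byProperty = people.Distinct(
                    new GenericComparer<Person>(p => p.Id));

                Console.WriteLine(byPredicate.Count()); // 2
                Console.WriteLine(byProperty.Count());  // 2
            }
        }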

    Read the article

  • comparing two tables [closed]

    - by sza
    I have two tables with identical structure, i.e. all the columns are identical: one column's datatype is text, one is varchar(255), and the rest are int. Let's say the first table's name is 'AAAAA'. Table AAAAA was processed and backed up earlier this month. Both tables were storing data, and now only the second table (BBBBB) is storing data. I need to find the unmatched records in the second table and add those records to table AAAAA. Your help will be highly appreciated. I tried to use 'EXCEPT', but it does not support the text datatype.

    Read the article

  • ASP.NET MVC Multilingual Web Application

    - by BobhatePradip
    We are going to see how we can show localized content in an ASP.NET MVC web application. We will look at two approaches. Approach 1: using static pages. We can go for this approach only when we have a few static localized pages. Approach 2: using a dynamic page with localized data at runtime. We should go for this approach when we have a large number of pages that must show data in a localized format; here we can use either resource files or data straight from the database. For details, check this link: http://www.codeproject.com/KB/aspnet/ASP_NET_MVC_Multilingual.aspx - there you will find a code sample with an explanation.

    Read the article

  • WebSockets vs. SSE

    - by user3385828
    Sorry for asking this here - I bet it has been asked plenty of times before, but this time it's something specific which I haven't seen answered anywhere else. Suppose I have a service which needs to query the database for different data once in a while. For this I have 2 or 3 SSE streams, each one with a different retry base time (20000 milliseconds, 1000 milliseconds...). What I'd like to know is whether WebSockets can handle different "data types" according to the request - for example, could I create one WebSocket to handle a notification system, a chat system and a group system, instead of separate SSE streams, and treat the data differently with JavaScript? And if so, would it perform better than making different queries to the server through different SSE streams?

    Read the article
