Search Results

Search found 21392 results on 856 pages for 'order of operations'.

Page 177/856 | < Previous Page | 173 174 175 176 177 178 179 180 181 182 183 184  | Next Page >

  • UML Class diagrams with Java packages?

    - by loosebruce
    I am trying to model in UML 2.0 a Java servlet application that has three classes: Servlet, essentially a main class that acts as the controller; DatabaseLogic, which contains methods for database operations; and XMLBuilder, which builds XML from a query result string. The classes use a variety of packages from the Java library, and I am unsure how to model this in UML. Do I have to create a package and show which libraries are used by each individual class, or can I just have one large package in the diagram with all the libraries, showing which classes have dependencies on which, as per this diagram? This is my first time working with Java properly (I'm a C++ guy). Apart from being a bit messy, is this a correct UML representation of the system I described? Does a package in UML mean the same thing as a package in Java?

    Read the article

  • Kronos Workforce Mobile Apps (w/Java ME tech) lets bosses and staff work better

    - by hinkmond
    The Kronos Workforce Mobile apps let bosses spy on their workers, and let workers do what workers do best (uh, you know, work?), all using Java ME technology. See: Enable your Mobile Workforce w/Kronos Here's a quote: Kronos® Workforce Mobile™ Manager – allows managers to use their devices to monitor workforce operations, resolve exceptions, and respond quickly to employee requests. Kronos Workforce Mobile Employee – enables employees to track their work in real time, quickly and easily review information such as their schedules and timecards, and request time off. Kronos mobile applications are delivered as native applications for [blah-blah-blah]. A JavaME option is also available, which runs on a wide range of feature phones. Good stuff for the enterprise. Java ME technology helps run the mobile enterprise. I like that. Kinda catchy... Hinkmond

    Read the article

  • Sort by an object's type

    - by Richard Levasseur
    Hi all, I have code that statically registers (type, handler_function) pairs at module load time, resulting in a dict like this: HANDLERS = { str: HandleStr, int: HandleInt, ParentClass: HandleCustomParent, ChildClass: HandleCustomChild } def HandleObject(obj): for data_type in sorted(HANDLERS.keys(), ???): if isinstance(obj, data_type): HANDLERS[data_type](obj) Where ChildClass inherits from ParentClass. The problem is that, since it's a dict, the order isn't defined - but how do I introspect type objects to figure out a sort key? The resulting order should be child classes followed by superclasses (most specific types first). E.g. str comes before basestring, and ChildClass comes before ParentClass. If types are unrelated, it doesn't matter where they go relative to each other.
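    One hedged approach (an illustration, not taken from the thread's answers): because a subclass's method resolution order is always longer than that of any of its base classes, sorting the registered types by len(t.__mro__) in descending order guarantees that child classes are tried before their parents, while unrelated types keep an arbitrary relative order. A minimal sketch using the names from the question:

      class ParentClass(object): pass
      class ChildClass(ParentClass): pass

      def HandleStr(obj): print("str handler")
      def HandleInt(obj): print("int handler")
      def HandleCustomParent(obj): print("parent handler")
      def HandleCustomChild(obj): print("child handler")

      HANDLERS = {
          str: HandleStr,
          int: HandleInt,
          ParentClass: HandleCustomParent,
          ChildClass: HandleCustomChild,
      }

      def HandleObject(obj):
          # Most specific type first: a child's __mro__ is longer than its parent's.
          for data_type in sorted(HANDLERS, key=lambda t: len(t.__mro__), reverse=True):
              if isinstance(obj, data_type):
                  HANDLERS[data_type](obj)
                  return  # assumption: only the most specific handler should fire

      HandleObject(ChildClass())  # -> child handler
      HandleObject("hello")       # -> str handler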

    Read the article

  • Storing credit card details

    - by Andrew
    I have a business requirement that forces me to store a customer's full credit card details (number, name, expiry date, CVV2) for a short period of time. Rationale: If a customer calls to order a product and their credit card is declined on the spot you are likely to lose the sale. If you take their details, thank them for the transaction and then find that the card is declined, you can phone them back and they are more likely to find another way of paying for the product. If the credit card is accepted you clear the details from the order. I cannot change this. The existing system stores the credit card details in clear text, and in the new system I am building to replace this I am clearly not going to replicate this! My question, then, is how I can securely store a credit card for a short period of time. I obviously want some kind of encryption, but what's the best way to do this? Environment: C#, WinForms, SQL-Server.
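    A minimal sketch of one possible direction (not a complete or PCI-compliant design, and the key handling shown is deliberately naive): encrypt the card details with AES before persisting them, and keep the key outside the database, for example via DPAPI or a restricted configuration store. Also worth checking with the acquirer: the card-scheme rules generally forbid retaining CVV2 after authorization even in encrypted form.

      using System;
      using System.IO;
      using System.Security.Cryptography;
      using System.Text;

      static class CardProtector
      {
          // Returns IV + ciphertext; the caller stores the byte[] and keeps the key elsewhere.
          public static byte[] Encrypt(string plainText, byte[] key)
          {
              using (Aes aes = Aes.Create())
              {
                  aes.Key = key;
                  aes.GenerateIV();
                  using (MemoryStream ms = new MemoryStream())
                  {
                      ms.Write(aes.IV, 0, aes.IV.Length);   // prepend IV for later decryption
                      using (CryptoStream cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
                      using (StreamWriter sw = new StreamWriter(cs, Encoding.UTF8))
                      {
                          sw.Write(plainText);
                      }
                      return ms.ToArray();   // safe: CryptoStream flushes its final block on Dispose
                  }
              }
          }
      }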

    Read the article

  • Oracle Unifies Oracle ATG Commerce and Oracle Endeca to Help Businesses Deliver Complete Cross-Channel Customer Experiences

    - by Jeri Kelley
    Today, Oracle announced Oracle Commerce, which unifies Oracle ATG Commerce and Oracle Endeca into one complete commerce solution. Oracle Commerce is designed to help businesses deliver consistent, relevant and personalized cross-channel customer experiences. “Oracle Commerce combines the best web commerce and customer experience solutions to enable businesses, whether B2C or B2B, to optimize the cross channel commerce experience,” said Ken Volpe, SVP, Product Development, Oracle Commerce. “Oracle Commerce demonstrates our focus on helping businesses leverage every aspect of its operations and technology investments to anticipate and exceed customer expectations.”Click here to learn more about this announcement.  

    Read the article

  • IntPtr in 32 Bit OS, UInt64 in 64 bit OS

    - by Ngu Soon Hui
    I'm trying to do an interop to a C++ structure from C#. The structure (in the C# wrapper) is something like this: [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto)] public struct SENSE4_CONTEXT { public System.IntPtr dwIndex; //or UInt64, depending on platform. } The underlying C++ structure is a bit abnormal. On a 32-bit OS, dwIndex must be IntPtr in order for the interop to work, but on a 64-bit OS, it must be UInt64 in order for the interop to work. Any idea how to modify the above structure to make it work on both 32- and 64-bit OSes?
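    For what it's worth, a hedged observation to verify against the actual native definition: System.IntPtr is itself pointer-sized, i.e. 4 bytes in a 32-bit process and 8 bytes in a 64-bit process, so a single IntPtr field will usually marshal correctly on both platforms as long as the native member really is pointer-sized rather than a fixed 64-bit integer. A small check:

      using System;
      using System.Runtime.InteropServices;

      [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto)]
      public struct SENSE4_CONTEXT
      {
          public IntPtr dwIndex;   // 4 bytes under x86, 8 bytes under x64
      }

      class SizeCheck
      {
          static void Main()
          {
              Console.WriteLine("IntPtr.Size = {0}", IntPtr.Size);                       // prints 4 or 8
              Console.WriteLine("struct size = {0}", Marshal.SizeOf(typeof(SENSE4_CONTEXT)));
          }
      }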

    Read the article

  • regex partial match for several different groups

    - by koral
    I need a regexp to match a string built from several groups (A is any letter, 9 is any digit):
    group 1 regex: [A-Z]{1,2}[0-9]? (examples: A, A9, AA9)
    group 2 regex: [A-Z]{1,3}[0-9]? (examples: A, AA, AAA, AAA9)
    group 3 regex: [A-Z]{2,3}[0-9]?[A-Z]? (examples: AAA, AA9, AA9A)
    group 4 regex: [0-9]{1,2}[A-Z]{1,2}[0-9]? (examples: 9A, 9AA, 9A9, 99A9)
    Not every group must be present, but those that appear must be in the correct order. I mean (digit is the group number): 1, 12, 123, 1234. So if group 3 is present, all preceding groups must also be present. As there are four groups (there can be more), an alternative like ^[A-Z]{1,2}[0-9]{1}|[A-Z]{1,2}[0-9]{1}\s{1}[A-Z]{1}[0-9]?$ is not the best option, as it would be complicated and difficult to maintain. Is there any solution with groups or something? The order of the groups is important.
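    A hedged sketch in Python (it assumes, as the question's own alternative pattern suggests, that the groups are separated by single spaces): nesting the optional groups enforces the rule that group N can only appear when groups 1..N-1 are present.

      import re

      G1 = r'[A-Z]{1,2}[0-9]?'
      G2 = r'[A-Z]{1,3}[0-9]?'
      G3 = r'[A-Z]{2,3}[0-9]?[A-Z]?'
      G4 = r'[0-9]{1,2}[A-Z]{1,2}[0-9]?'

      # Each group is optional, but only together with everything that follows it.
      pattern = re.compile(r'^%s( %s( %s( %s)?)?)?$' % (G1, G2, G3, G4))

      for s in ('AA9', 'AA9 AAA9', 'AA9 AAA9 AA9A', 'AA9 AAA9 AA9A 9A9', 'AAA9 9A9'):
          print(s, bool(pattern.match(s)))   # the last one is rejected: group 1 is missing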

    Read the article

  • How can I pass a raw System.Drawing.Image to an .ashx?

    - by Mike C
    I am developing an application that stores images as Base64 strings in xml files. I also want to allow the user to crop the image before saving it to the file, preferably all in memory without having to save a temp file, and then delete it afterwards. In order to display the newly uploaded image, I need to create a HTTP handler that I can bind the asp:Image to. The only examples for doing this online require passing the .ashx an ID and then pulling the image from a DB or other data store. Is it possible to somehow pass the raw data to the .ashx in order to get back the image? Thanks, Mike
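    One hedged workaround (an illustration, not the only way): keep the uploaded or cropped bytes in Session and let the handler stream them back, so nothing is written to disk; the page then points the asp:Image at the handler's URL. The session key and content type below are assumptions.

      using System.Web;
      using System.Web.SessionState;

      public class ImagePreviewHandler : IHttpHandler, IReadOnlySessionState
      {
          public void ProcessRequest(HttpContext context)
          {
              // Assumption: the page stored the decoded image bytes under this key
              // after reading the Base64 string from the XML file.
              byte[] bytes = context.Session["PendingImageBytes"] as byte[];
              if (bytes == null)
              {
                  context.Response.StatusCode = 404;
                  return;
              }
              context.Response.ContentType = "image/png";   // adjust to the real format
              context.Response.BinaryWrite(bytes);
          }

          public bool IsReusable
          {
              get { return false; }
          }
      }

    The page would then set something like imgPreview.ImageUrl = "~/ImagePreviewHandler.ashx" (optionally with a cache-busting query-string value) after putting the bytes into Session.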

    Read the article

  • Get #part in URL with PHP/Symfony

    - by fesja
    Hi, I'm working with Symfony 1.2. I have a view with a list of objects. I can order them, filter them by category, or move to the next page (there is pagination). Everything is done with AJAX, so I don't have to reload the whole page. What I want to achieve is to have http://urltopage#page=1&order=title&cats=1,2, for example, so the new state is saved in the browser history and the user can paste the URL somewhere else. I haven't found a way to get the # part. I know that it's only for the browser, but I can't believe I can't get at it through PHP. I'm sure there is a simple solution I'm missing... thanks a lot!
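    For context, one hedged note: browsers never send the fragment (the part after #) to the server, so PHP/Symfony cannot see it directly; the page's own JavaScript has to read window.location.hash and pass it along explicitly, for example as a query string on the AJAX call. A minimal client-side sketch (the target URL is illustrative):

      // "page=1&order=title&cats=1,2" without the leading "#"
      var hash = window.location.hash.replace(/^#/, '');

      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/list/filter?' + hash, true);   // the action now sees ordinary request parameters
      xhr.send();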

    Read the article

  • Ouya / Android : button mapping bitwise

    - by scorvi
    I am programming a game with the Gameplay3D engine, but the Android side has no gamepad support, and that is what I need to port my game to Ouya. So I implemented simple gamepad support; it supports two gamepads. My problem is that I put the button states in a float array for every gamepad, but the Gameplay3D engine stores these states in an unsigned int _buttons variable. It is set with bitwise operations, and I have no clue how to translate my array into this.
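    A hedged sketch (the names and the press threshold are illustrative, not Gameplay3D's actual API): pack the per-button float array into a single unsigned int by setting bit i when button i is down, which is the shape an engine-side _buttons bitmask expects.

      #include <cstdio>

      unsigned int packButtons(const float* states, int count)
      {
          unsigned int buttons = 0;
          for (int i = 0; i < count && i < 32; ++i)
          {
              if (states[i] > 0.5f)          // treat anything past half-press as "down"
                  buttons |= (1u << i);      // set bit i
          }
          return buttons;
      }

      int main()
      {
          float pad[4] = { 1.0f, 0.0f, 1.0f, 0.0f };   // buttons 0 and 2 pressed
          std::printf("0x%X\n", packButtons(pad, 4));  // prints 0x5
      }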

    Read the article

  • How to decide whether to implement an operation as Entity operation vs Service operation in Domain Driven Design?

    - by Louis Rhys
    I am reading Evans's Domain Driven Design. The book says that there are entities and there are services. If I were to implement an operation, how do I decide whether I should add it as a method on an entity or put it in a service class? E.g. myEntity.DoStuff() or myService.DoStuffOn(myEntity)? Does it depend on whether other entities are involved? If it involves other entities, implement it as a service operation? But entities can have associations and the operation can traverse them from there too, right? Does it depend on whether it is stateless or not? But a service can also access an entity's variables, right? For example, myService.DoStuffOn could contain code like if(myEntity.IsX) doSomething(); which means that it will depend on the state? Or does it depend on complexity? How do you define complex operations?
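    For illustration, a hedged sketch of the rule of thumb most DDD material converges on (not a quote from Evans): behaviour that only needs the entity's own state belongs on the entity; a stateless operation that coordinates several entities, or that is not naturally the responsibility of any one of them, becomes a domain service.

      public class Account
      {
          public decimal Balance { get; private set; }

          // Entity operations: they touch only this entity's state.
          public void Deposit(decimal amount)  { Balance += amount; }
          public void Withdraw(decimal amount) { Balance -= amount; }
      }

      public class TransferService
      {
          // Service operation: stateless, coordinates two entities; neither
          // account is the obvious "owner" of the behaviour.
          public void Transfer(Account from, Account to, decimal amount)
          {
              from.Withdraw(amount);
              to.Deposit(amount);
          }
      }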

    Read the article

  • SQL Server: export data via SQL query?

    - by rlb.usa
    I have FKs and PKs all over my database, and table data needs to be inserted in a certain order or else I get FK/PK insertion errors. I'm tired of executing the wizard again and again to transfer data one table at a time. In the SQL Server export data wizard there is an option to "Write a query to specify the data to transfer". I'd like to write the query myself and specify the correct order. Will this solve my problem? How do I do this? Can you provide a sample query (or a link to one)? The databases are on two different servers, SQL Server 2008 on each; the database names and permissions are the same; each table name and column is the same; and I need Identity Insert for each table.

    Read the article

  • blurry lines between web application context layer, service layer and data access layer in spring

    - by thenaglecode
    I originally asked this question on SO but on advice I have moved it here... I'll admit I'm a Spring newbie, and you can correct me if I'm wrong, but this one-liner looks kinda fishy in a best-practices sort of way: @RepositoryRestResource(collectionResourceRel="people"...) public interface PersonRepository extends PagingAndSortingRepository<Person, Long>. For those who are unaware, this single declaration does many things: it is an interface definition that can be registered in an application context as a JPA repository, automagically hooking up all the default CRUD operations within a persistence context (that is externally configured), and it also configures default controller/request-mapping/handler functionality at the namespace "/people" relative to your configured dispatcher servlet mapping. Here's my point: I just crossed three conceptual layers with one line of code! This feels like it goes against my separation-of-concerns instincts, but I wanted to hear your opinion. And for the sake of being on a question and answer site, I would like to know whether there is a better way of separating these different layers (Service, Data, Controllers) whilst keeping configuration as minimal as possible.

    Read the article

  • Increase the size of a memory mapped file

    - by sandun dhammika
    I am maintaining a memory-mapped file to store my tree-like data structure. While updating the data structure I ran into this problem: the file is limited in size, and can't be too large or too small. I have methods like void mapfile_insert_record(RECORD* /* record */); and void mapfile_modify_record(RECORD* /* record */); Both operations can exceed the free space available in the mapped file. How do I overcome this? What strategy should I use? Should I (a) check, as a precondition of both methods, whether the operation would exceed the file, or (b) grow it dynamically, for example by running a timer that constantly polls the file's available free space and extends the file automatically? Any ideas or patterns to overcome this problem?
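    A hedged POSIX sketch of option (a), with error handling trimmed and illustrative names standing in for the existing code: check the free space as a precondition of each insert/modify and, when it is not enough, grow the file with ftruncate() and re-map it.

      #include <sys/mman.h>
      #include <sys/types.h>
      #include <unistd.h>
      #include <stddef.h>

      typedef struct {
          int    fd;
          void  *base;
          size_t size;   /* mapped length      */
          size_t used;   /* bytes already used */
      } MAPFILE;

      static int mapfile_ensure_space(MAPFILE *mf, size_t needed)
      {
          if (mf->size - mf->used >= needed)
              return 0;                                   /* enough room already */

          size_t new_size = mf->size * 2;                 /* simple growth policy */
          while (new_size - mf->used < needed)
              new_size *= 2;

          if (munmap(mf->base, mf->size) != 0)            /* drop the old view    */
              return -1;
          if (ftruncate(mf->fd, (off_t)new_size) != 0)    /* extend the file      */
              return -1;
          mf->base = mmap(NULL, new_size, PROT_READ | PROT_WRITE,
                          MAP_SHARED, mf->fd, 0);         /* map the larger file  */
          if (mf->base == MAP_FAILED)
              return -1;
          mf->size = new_size;
          return 0;
      }

    Because the new mapping may land at a different address, it helps if the tree stores offsets from the start of the file rather than raw pointers.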

    Read the article

  • Binomial test in Python

    - by Morlock
    I need to do a binomial test in Python that allows calculation for values of n on the order of 10000. I have implemented a quick binomial_test function using scipy.misc.comb; however, it is pretty much limited around n = 1000, I guess because it reaches the biggest representable number while computing factorials or the combinatorial itself. Here is my function: from scipy.misc import comb def binomial_test(n, k): """Calculate binomial probability """ p = comb(n, k) * 0.5**k * 0.5**(n-k) return p How could I use a native Python (or numpy, scipy...) function in order to calculate that binomial probability? If possible, I need scipy 0.7.2 compatible code. Many thanks!
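    One way that avoids the overflow without requiring a newer SciPy: compute the probability in log space with math.lgamma (standard library, Python 2.7 and later), which reproduces exactly the value the question's function computes but stays finite for large n. A hedged sketch:

      from math import lgamma, exp, log

      def binomial_pmf(n, k, p=0.5):
          """P(X = k) for X ~ Binomial(n, p), computed in log space so that
          large n (10000 and beyond) does not overflow."""
          log_coeff = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
          return exp(log_coeff + k * log(p) + (n - k) * log(1.0 - p))

      print(binomial_pmf(10000, 5000))   # roughly 0.00798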

    Read the article

  • regenerate the ID column in a MySQL table with a MySQL script or PHP

    - by DomingoSL
    Hello, I have a database filled with information about the users of my web page. Like many MySQL tables, the table has an ID column that is auto-increment. The issue is that when somebody deletes their account from the site, a gap is left in the ID sequence, which I don't want because I have a script that fails if it finds a gap in the IDs. Example:

      ID  Name   PASS
      1   Jhon   1234
      2   Max    2233
      3   Jorge  2232

    If Max leaves and a new user signs up, this is what will happen:

      ID  Name   PASS
      1   Jhon   1234
      4   NewU   1133
      3   Jorge  2232

    So what is the best way to delete somebody from the database in order to avoid this issue? Or, if there is no such way, is it possible to write a PHP or MySQL script that wipes the ID column and regenerates it in order? Thanks a lot! Sorry for my English.
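    If the IDs really must be renumbered (usually it is better to make the script tolerate gaps, since renumbering breaks any foreign keys or external references to the old IDs), one hedged MySQL sketch uses a user variable; the table and column names are illustrative.

      -- Run inside a maintenance window and only if nothing else references users.id.
      SET @i := 0;
      UPDATE users SET id = (@i := @i + 1) ORDER BY id;
      ALTER TABLE users AUTO_INCREMENT = 1;   -- MySQL bumps this to max(id)+1 automatically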

    Read the article

  • Windows in StreamInsight: Hopping vs. Snapshot

    - by Roman Schindlauer
    Three weeks ago, we explained the basic concept of windows in StreamInsight: defining sets of events that serve as arguments for set-based operations, like aggregations. Today, we want to discuss the so-called Hopping Windows and compare them with Snapshot Windows. We will compare these two because they can serve similar purposes with different behaviors; we will discuss the remaining window type, Count Windows, another time.

    Hopping (and its syntactic-sugar-sister Tumbling) windows are probably the most straightforward windowing concept in StreamInsight. A hopping window is defined by its length and the offset from one window to the next. They are aligned with some absolute point on the timeline (which can also be given as a parameter to the window) and create sets of events. The diagram below shows an example of a hopping window with a length of 1h and a hop size (the offset) of 15 minutes, hence creating overlapping windows. Two aspects in this diagram are important. First, since this window is overlapping, an event can fall into more than one window. Second, if an (interval) event spans a window boundary, its lifetime will be clipped to the window before it is passed to the set-based operation. That's the default and currently only available window input policy. (This should only concern you if you are using a time-sensitive user-defined aggregate or operator.)

    The set-based operation will be applied to each of these sets, yielding a result. This result is: a single scalar value in the case of built-in or user-defined aggregates; a subset of the input payloads in the case of the TopK operator; or arbitrary events when using a user-defined operator. The timestamps of the result are almost always the ones of the windows. Only the user-defined operator can create new events with timestamps. (However, even these event lifetimes are subject to the window's output policy, which is currently always to clip to the window end.) Let's assume we were calculating the sum over some payload field:

      var result = from window in source.HoppingWindow(
                       TimeSpan.FromHours(1),
                       TimeSpan.FromMinutes(15),
                       HoppingWindowOutputPolicy.ClipToWindowEnd)
                   select new { avg = window.Avg(e => e.Value) };

    Now each window is reflected by one result event. As you can see, the window definition defines the output frequency. No matter how many or few events we got from the input, this hopping window will produce one result every 15 minutes, except for those windows that do not contain any events at all, because StreamInsight window operations are empty-preserving (more about that another time). The “forced” output for every window can become a performance issue if you have a real-time query with many events in a wide group & apply. Let me explain: imagine you have a lot of events that you group by and then aggregate within each group, a classical streaming pattern. The hopping window produces a result in each group at exactly the same point in time for all groups, since the window boundaries are aligned with the timeline, not with the event timestamps. This means that the query output will become very bursty, delivering the results of all the groups at the same point in time. This becomes especially obvious if the events are long-lasting, spanning multiple windows each, so that the produced result events do not change their value very often. In such a case, a snapshot window can remedy this.

    Snapshot windows are more difficult to explain than hopping windows: they represent those periods in time when no event changes occur. In other words, if you mark all event start and end times on your timeline, then you are looking at all snapshot window boundaries. If your events are never overlapping, the snapshot window will not make much sense. It is commonly used together with timestamp modification, which makes it a very powerful tool. Or as Allan Mitchell expressed it in a recent tweet: “I used to look at SnapshotWindow() with disdain. Now she is my mistress, the one I turn to in times of trouble and need”.

    Let's look at a simple example: I want to compute the average of some value in my events over the last minute. I don't want this output to be produced at fixed intervals, but as soon as it changes (that's the true event-driven spirit!). The snapshot window will include all currently active events at each point in time, hence we need to extend our original events' lifetimes into the future. Applying the snapshot window on these events, it will appear to be “looking back into the past”. If you look at the result produced in this diagram, you can easily prove that, at each point in time, the current event value represents the average of all original input events within the last minute. Here is the LINQ representation of that query, applying the lifetime extension before the snapshot window:

      var result = from window in source
                       .AlterEventDuration(e => TimeSpan.FromMinutes(1))
                       .SnapshotWindow(SnapshotWindowOutputPolicy.Clip)
                   select new { avg = window.Avg(e => e.Value) };

    With more complex modifications of the event lifetimes you can achieve many more query patterns. For instance “running totals”: keep the event start times, but snap their end times to some fixed time, like the end of the day. Each snapshot then “sees” all events that have happened in the respective time period so far. Regards, The StreamInsight Team

    Read the article

  • If you are forced to use an Anemic domain model, where do you put your business logic and calculated fields?

    - by Jess
    Our current O/RM tool does not really allow for rich domain models, so we are forced to utilize anemic (DTO) entities everywhere. This has worked fine, but I continue to struggle with where to put basic object-based business logic and calculated fields. Current layers: Presentation Service Repository Data/Entity Our repository layer has most of the basic fetch/validate/save logic, although the service layer does a lot of the more complex validation & saving (since save operations also do logging, checking of permissions, etc). The problem is where to put code like this: Decimal CalculateTotal(LineItemEntity li) { return li.Quantity * li.Price; } or Decimal CalculateOrderTotal(OrderEntity order) { Decimal orderTotal = 0; foreach (LineItemEntity li in order.LineItems) { orderTotal += CalculateTotal(li); } return orderTotal; } Any thoughts?
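    One hedged compromise that keeps the entities anemic (an illustration, not a recommendation of the pattern itself): hang the calculations off the DTOs as extension methods, or an equivalent static calculator class, so the logic lives in one discoverable place and is callable from the service and presentation layers alike. This assumes the LineItemEntity and OrderEntity types from the question.

      public static class OrderCalculations
      {
          public static decimal Total(this LineItemEntity li)
          {
              return li.Quantity * li.Price;
          }

          public static decimal OrderTotal(this OrderEntity order)
          {
              decimal total = 0;
              foreach (LineItemEntity li in order.LineItems)
                  total += li.Total();
              return total;
          }
      }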

    Read the article

  • Accessing different connection strings at runtime in ASP.NET MVC 1

    - by Neil T.
    I'm trying to implement integration testing in my ASP.NET MVC 1.0 solution. The technologies in use are LINQ-to-SQL, NUnit and WatiN. I recently discovered a pattern that will allow me to create a testing version of the database on the fly without modifying the development version of the database. I needed this behavior in order to run my user interface tests in WatiN that may modify the database. The plan is to modify the connection string in the Web.config file, and pass that new connection string to the DataContext constructor. This way, I don't have to add routes or modify my URLs in order to perform the integration testing. I've set up the project so that the test setup can modify the connection string to point to the test database when the tests are running. The connection string is stored in web.config. The problem I'm having is that when I try to run the tests, I get a NullReferenceException when trying to access the HTTPContext. From everything that I have read so far, the HTTPContext is only available within the context of a controller. Here is the code for the property that is supposed to give me the reference to the Web.config file: private System.Configuration.Configuration WebConfig { get { ExeConfigurationFileMap fileMap = new ExeConfigurationFileMap(); // NullReferenceException occurs on this line. fileMap.ExeConfigFilename = HttpContext.Current.Server.MapPath("~\\web.config"); System.Configuration.Configuration config = ConfigurationManager.OpenMappedExeConfiguration(fileMap, ConfigurationUserLevel.None); return config; } } Is there something that I am missing in order to make this work? Is there a better way to accomplish what I'm trying to achieve? UPDATE: I decided to abandon the modification of Web.config in lieu of a "request-scoped DataContext" pattern that I found here. From the looks of it, I believe it should give me the results I'm looking for. However, during the TextFixtureSetUp, I try to create a new copy of the database for testing purposes, and it fails silently. When I get to the tests, the repository still uses the production database connection string to load data.
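    On the NullReferenceException specifically: HttpContext.Current is null whenever no web request is in flight, which is exactly the situation inside an NUnit fixture. A hedged sketch of one way around it (WebRootPath is an assumption supplied by the test setup, not part of the original code):

      using System.Configuration;
      using System.IO;
      using System.Web;

      public class ConfigProvider
      {
          // Assumption: set by the test fixture, e.g. from AppDomain.CurrentDomain.BaseDirectory
          // or a known source path such as @"C:\src\MySite".
          public static string WebRootPath = @"C:\src\MySite";

          public System.Configuration.Configuration WebConfig
          {
              get
              {
                  // Server.MapPath only works inside a real request; under NUnit
                  // HttpContext.Current is null, so fall back to a physical path.
                  string configPath = (HttpContext.Current != null)
                      ? HttpContext.Current.Server.MapPath("~/web.config")
                      : Path.Combine(WebRootPath, "web.config");

                  var fileMap = new ExeConfigurationFileMap { ExeConfigFilename = configPath };
                  return ConfigurationManager.OpenMappedExeConfiguration(
                      fileMap, ConfigurationUserLevel.None);
              }
          }
      }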

    Read the article

  • Grouping with operands question

    - by Filip
    I have a table:

      mysql> desc kursy_bid;
      +-----------+-------------+------+-----+---------+-------+
      | Field     | Type        | Null | Key | Default | Extra |
      +-----------+-------------+------+-----+---------+-------+
      | datetime  | datetime    | NO   | PRI | NULL    |       |
      | currency  | varchar(6)  | NO   | PRI | NULL    |       |
      | value     | varchar(10) | YES  |     | NULL    |       |
      +-----------+-------------+------+-----+---------+-------+
      3 rows in set (0.01 sec)

    I would like to select some rows from the table, grouped by some time interval (it can be one day), where I will have the first row and the last row of the group, plus max(value) and min(value). I tried:

      select datetime,
             (select value order by datetime asc limit 1) open,
             (select value order by datetime desc limit 1) close,
             max(value), min(value)
      from kursy_bid_test
      where datetime > '2009-09-14 00:00:00' and currency = 'eurpln'
      group by month(datetime), day(datetime), hour(datetime);

    but the output is:

      | open   | close  | datetime            | max(value) | min(value) |
      +--------+--------+---------------------+------------+------------+
      | 1.4581 | 1.4581 | 2009-09-14 00:00:05 | 4.1712     | 1.4581     |
      | 1.4581 | 1.4581 | 2009-09-14 01:00:01 | 1.4581     | 1.4581     |

    As you can see, open and close are the same (but they shouldn't be). What should the query be to do what I want?
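    One hedged way to get the open and close without correlated subqueries (the inner SELECT value ... LIMIT 1 in the query above is not tied to the group, which is why open and close come back identical) is the GROUP_CONCAT/SUBSTRING_INDEX idiom, since this MySQL version predates window functions:

      SELECT
          MIN(datetime)                                                       AS period_start,
          SUBSTRING_INDEX(GROUP_CONCAT(value ORDER BY datetime ASC),  ',', 1) AS open,
          SUBSTRING_INDEX(GROUP_CONCAT(value ORDER BY datetime DESC), ',', 1) AS close,
          MAX(value)                                                          AS max_value,
          MIN(value)                                                          AS min_value
      FROM kursy_bid
      WHERE datetime > '2009-09-14 00:00:00'
        AND currency = 'eurpln'
      GROUP BY MONTH(datetime), DAY(datetime), HOUR(datetime);

    Two caveats worth checking: value is varchar(10), so MAX/MIN compare as strings unless cast to DECIMAL, and group_concat_max_len may need to be raised if a group has many rows.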

    Read the article

  • Circular motion on low powered hardware

    - by Akroy
    I was thinking about platforms and enemies moving in circles in old 2D games, and I was wondering how that was done. I understand parametric equations, and it's trivial to use sin and cos to do it, but could an NES or SNES make real time trig calls? I admit heavy ignorance, but I thought those were expensive operations. Is there some clever way to calculate that motion more cheaply? I've been working on deriving an algorithm from trig sum identities that would only use precalculated trig, but that seems convoluted.
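    For what it's worth, those consoles typically avoided runtime trig by using small lookup tables of precomputed, fixed-point sine values. Another classic trick that needs no table at all is the incremental "Minsky circle" recurrence from HAKMEM, a hedged sketch of which follows; with eps chosen as a power of two the multiplies become shifts, which is why it suited 8/16-bit hardware.

      #include <stdio.h>

      /* Incremental rotation: no sin/cos calls, just two multiply-adds per step.
       * Using the freshly updated x in the second line keeps the orbit stable
       * instead of spiralling outward. */
      int main(void)
      {
          float eps = 0.0625f;        /* step size; smaller = rounder, slower    */
          float x = 100.0f, y = 0.0f; /* start on a circle of radius 100         */

          for (int i = 0; i < 8; ++i) {
              x -= eps * y;
              y += eps * x;           /* note: uses the new x                    */
              printf("%.1f %.1f\n", x, y);
          }
          return 0;
      }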

    Read the article

  • apache front end using mod_proxy_ajp to tomcat on different servers

    - by user302307
    Does anyone know the steps to run Apache on server A as a front end and use mod_proxy_ajp to connect to Tomcat instances on server B? I want to run Apache on server A to do name-based vhosts that connect to many Tomcat servers. I can only get mod_proxy_ajp to work if Apache and Tomcat are on the same server. What I've tried so far on server A, running Apache 2.2:

      NameVirtualHost *:80
      ServerName tc0.domo.lan
      ErrorLog "C:\Apache\Apache2.2\logs\tc0.ajp.error.log"
      CustomLog "C:\Apache\Apache2.2\logs\tc0.ajp.access.log" combined
      DocumentRoot C:/htdocs0
      AddDefaultCharset Off
      Order deny,allow
      Allow from all
      ProxyPass / ajp://192.168.77.233:8009/
      ProxyPassReverse / ajp://192.168.77.233:8009/
      Options FollowSymLinks
      AllowOverride None
      Order deny,allow
      Allow from all

    Server B: 192.168.77.233, Tomcat 6 connector. I can confirm that if I go to http://192.168.77.233:8080/manager/html, Tomcat works. When I use a packet sniffer on server A, I see that server A is trying to connect to server B on port 80 when I connect to http://tc0.domo.lan/manager/html on server A.
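    For comparison, a hedged sketch of a working server A vhost (the host name, IP, port, and log paths come from the question; the container tags and module lines are assumptions). When the two machines are separate, the usual suspects are mod_proxy/mod_proxy_ajp not being loaded, a firewall blocking port 8009, or the AJP connector on server B listening only on localhost.

      # server A (httpd.conf or an included vhost file)
      LoadModule proxy_module modules/mod_proxy.so
      LoadModule proxy_ajp_module modules/mod_proxy_ajp.so

      NameVirtualHost *:80
      <VirtualHost *:80>
          ServerName tc0.domo.lan
          ErrorLog  "C:\Apache\Apache2.2\logs\tc0.ajp.error.log"
          CustomLog "C:\Apache\Apache2.2\logs\tc0.ajp.access.log" combined

          ProxyPass        / ajp://192.168.77.233:8009/
          ProxyPassReverse / ajp://192.168.77.233:8009/
      </VirtualHost>

      # server B, conf/server.xml: make sure the AJP connector is enabled and
      # reachable from server A (stock Tomcat 6 entry shown)
      <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />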

    Read the article

  • SQL Server: collect values in an aggregation temporarily and reuse in the same query

    - by Erwin Brandstetter
    How do I accumulate values in t-SQL? AFAIK there is no ARRAY type. I want to reuse the values like demonstrated in this PostgreSQL example using array_agg(). SELECT a[1] || a[i] AS foo ,a[2] || a[5] AS bar -- assuming we have >= 5 rows for simplicity FROM ( SELECT array_agg(text_col ORDER BY text_col) AS a ,count(*)::int4 AS i FROM tbl WHERE id between 10 AND 100 ) x How would I best solve this with t-SQL? Best I could come up with are two CTE and subselects: ;WITH x AS ( SELECT row_number() OVER (ORDER BY name) AS rn ,name AS a FROM #t WHERE id between 10 AND 100 ), i AS ( SELECT count(*) AS i FROM x ) SELECT (SELECT a FROM x WHERE rn = 1) + (SELECT a FROM x WHERE rn = i) AS foo ,(SELECT a FROM x WHERE rn = 2) + (SELECT a FROM x WHERE rn = 5) AS bar FROM i Test setup: CREATE TABLE #t( id INT PRIMARY KEY ,name NVARCHAR(100)) INSERT INTO #t VALUES (3 , 'John') ,(5 , 'Mary') ,(8 , 'Michael') ,(13, 'Steve') ,(21, 'Jack') ,(34, 'Pete') ,(57, 'Ami') ,(88, 'Bob') Is there a simpler way?

    Read the article

  • PostgreSQL - fetch the row which has the Max value for a column

    - by Joshua Berry
    I'm dealing with a Postgres table (called "lives") that contains records with columns for time_stamp, usr_id, transaction_id, and lives_remaining. I need a query that will give me the most recent lives_remaining total for each usr_id:
    - There are multiple users (distinct usr_id's)
    - time_stamp is not a unique identifier: sometimes user events (one per row in the table) will occur with the same time_stamp.
    - trans_id is unique only for very small time ranges: over time it repeats
    - remaining_lives (for a given user) can both increase and decrease over time
    Example:

      time_stamp|lives_remaining|usr_id|trans_id
      -----------------------------------------
        07:00   |       1       |   1  |   1
        09:00   |       4       |   2  |   2
        10:00   |       2       |   3  |   3
        10:00   |       1       |   2  |   4
        11:00   |       4       |   1  |   5
        11:00   |       3       |   1  |   6
        13:00   |       3       |   3  |   1

    As I will need to access other columns of the row with the latest data for each given usr_id, I need a query that gives a result like this:

      time_stamp|lives_remaining|usr_id|trans_id
      -----------------------------------------
        11:00   |       3       |   1  |   6
        10:00   |       1       |   2  |   4
        13:00   |       3       |   3  |   1

    As mentioned, each usr_id can gain or lose lives, and sometimes these timestamped events occur so close together that they have the same timestamp! Therefore this query won't work:

      SELECT b.time_stamp, b.lives_remaining, b.usr_id, b.trans_id
      FROM (SELECT usr_id, max(time_stamp) AS max_timestamp
            FROM lives GROUP BY usr_id ORDER BY usr_id) a
      JOIN lives b ON a.max_timestamp = b.time_stamp

    Instead, I need to use both time_stamp (first) and trans_id (second) to identify the correct row. I also then need to pass that information from the subquery to the main query that will provide the data for the other columns of the appropriate rows. This is the hacked up query that I've gotten to work:

      SELECT b.time_stamp, b.lives_remaining, b.usr_id, b.trans_id
      FROM (SELECT usr_id, max(time_stamp || '*' || trans_id) AS max_timestamp_transid
            FROM lives GROUP BY usr_id ORDER BY usr_id) a
      JOIN lives b ON a.max_timestamp_transid = b.time_stamp || '*' || b.trans_id
      ORDER BY b.usr_id

    Okay, so this works, but I don't like it. It requires a query within a query, a self join, and it seems to me that it could be much simpler by grabbing the row that MAX found to have the largest timestamp and trans_id. The table "lives" has tens of millions of rows to parse, so I'd like this query to be as fast and efficient as possible. I'm new to RDBMSs and Postgres in particular, so I know that I need to make effective use of the proper indexes. I'm a bit lost on how to optimize. I found a similar discussion here. Can I perform some type of Postgres equivalent to an Oracle analytic function? Any advice on accessing related column information used by an aggregate function (like MAX), creating indexes, and creating better queries would be much appreciated! P.S. You can use the following to create my example case:

      create TABLE lives (time_stamp timestamp, lives_remaining integer,
                          usr_id integer, trans_id integer);
      insert into lives values ('2000-01-01 07:00', 1, 1, 1);
      insert into lives values ('2000-01-01 09:00', 4, 2, 2);
      insert into lives values ('2000-01-01 10:00', 2, 3, 3);
      insert into lives values ('2000-01-01 10:00', 1, 2, 4);
      insert into lives values ('2000-01-01 11:00', 4, 1, 5);
      insert into lives values ('2000-01-01 11:00', 3, 1, 6);
      insert into lives values ('2000-01-01 13:00', 3, 3, 1);
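    For what it's worth, a hedged alternative that avoids both the string concatenation and the self-join: PostgreSQL's DISTINCT ON keeps the first row of each usr_id group according to the ORDER BY, so ordering by time_stamp and then trans_id descending picks exactly the row wanted. An index on (usr_id, time_stamp DESC, trans_id DESC) helps it scale.

      SELECT DISTINCT ON (usr_id)
             time_stamp, lives_remaining, usr_id, trans_id
      FROM   lives
      ORDER  BY usr_id, time_stamp DESC, trans_id DESC;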

    Read the article
