Search Results

Search found 21392 results on 856 pages for 'order of operations'.


  • Windows 7 64 Bit - ODBC32 - Legacy App Problem

    - by Arturo Caballero
    Good day StackOverflowers, I'm a little stuck (really stuck) with an issue with a legacy application in my organization. I have a Windows 7 Enterprise 64-bit machine with Access 2000 installed and the legacy app (built with something like VB, but older). The app uses a System ODBC DSN to connect to a SQL Server 2000 database on a remote server. I created the DSN using the 32-bit administrator at C:\Windows\SysWOW64\odbcad32.exe; I did not use the default 64-bit Windows 7 administrator because DSNs created there are not visible to the legacy app. I tested the ODBC connection with Access and it worked OK; I can access the remote database. Then I ran the legacy app as Administrator. The app can see the DSN, but credential validation fails with these errors:

        DIAG [08001] [Microsoft][ODBC SQL Server Driver][Multi-Protocol]SQL Server does not exist or access denied. (17)
        DIAG [01000] [Microsoft][ODBC SQL Server Driver][Multi-Protocol]ConnectionOpen (Connect()). (53)
        DIAG [IM006] [Microsoft][ODBC Driver Manager] Driver's SQLSetConnectAttr failed (0)

    I use Trusted Connection on the DSN so the user is validated by the domain controller. I suspect the credentials are not being passed from the legacy app to ODBC, or something like that. I don't have the source code of the legacy app, so I can't debug the connection. I have also turned off the firewall. Any ideas? Thanks in advance!
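    For what it's worth, a quick way to confirm which DSN a 32-bit app can actually see is to look in the registry, since the two ODBC administrators on 64-bit Windows write System DSNs to different hives:

        HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\ODBC\ODBC.INI   (32-bit DSNs, created by SysWOW64\odbcad32.exe)
        HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI               (64-bit DSNs, created by System32\odbcad32.exe)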

    Read the article

  • Agile sysadmin and devops - How to accomplish?

    - by Marco Ramos
    Nowadays, agile systems administration and devops are among the most trending topics in systems administration and operations. Both concepts are mainly focused on bridging the gap between operations/sysadmins and the projects (developers, business, etc.). Even if you have never heard of the devops concept, I'm sure this topic is your concern too. So, what tools and techniques do you use to accomplish devops in your companies? I'm particularly interested in topics like change management, continuous integration, and automation, but not only those. Please share your thoughts. I'm looking forward to reading your answers/opinions :)

    Read the article

  • Processor speeds on my machine don't live up to manufacturer hype

    - by atch
    Why am I not seeing the promised speed claims of processor manufacturers on my computer? Producers of processors claim that their product can perform so many thousands (or millions) of operations per second. And yet on my machine (4 GB RAM, 3.5 GHz), a typical program (Word, Visual Studio, etc.) takes at least 10 seconds to start. I've formatted my hard drive and ticked all the necessary boxes to optimize my machine, and yet I'm not seeing the promised speeds. Say it takes Outlook ten seconds to load. How many millions of operations does it really go through in order to start up?
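    As a back-of-the-envelope check (very loosely assuming on the order of one instruction per cycle): a 3.5 GHz CPU runs about 3.5 × 10^9 cycles per second, so a ten-second startup spans roughly 3.5 × 10^10 cycles. The CPU is almost never the bottleneck for program startup; the time is dominated by disk I/O - loading the executable, its DLLs, and configuration data - during which the processor is mostly idle waiting on much slower storage.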

    Read the article

  • Delivery of JMS message before the transaction is committed

    - by ewernli
    Hi, I have a very simple scenario involving a database and a JMS broker in an application server (Glassfish). The scenario is dead simple:

    1. An EJB inserts a row in the database and sends a message.
    2. When the message is delivered to an MDB, the row is read and updated.

    The problem is that sometimes the message is delivered before the insert has been committed in the database. This is actually understandable if we consider the two-phase commit protocol:

    1. Prepare JMS.
    2. Prepare database.
    3. Commit JMS.
    4. (Tiny little gap where the message can be delivered before the insert has been committed.)
    5. Commit database.

    I've discussed this problem with others, but the answer was always: "Strange, it should work out of the box." My questions are then: How could it work out of the box? My scenario sounds fairly simple; why aren't more people having similar trouble? Am I doing something wrong? Is there a way to solve this issue correctly? Here are a few more details about my understanding of the problem: This timing issue exists only if the participants are treated in this order. If the 2PC treated the participants in the reverse order (database first, then message broker), it would be fine. The problem happens randomly but is completely reproducible. I found no way to control the order of the participants in distributed transactions in the JTA, JCA, and JPA specifications, nor in the Glassfish documentation. We could assume they are enlisted in the distributed transaction in the order in which they are used, but with an ORM such as JPA it's difficult to know when the data is flushed and when the database connection is really used. Any idea?
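    One pattern that sidesteps the race (a sketch under stated assumptions: the queue name, the OrderRow entity, and the message property below are illustrative, and the container is assumed to redeliver on rollback, which is standard MDB behavior) is to have the MDB roll back when the row is not yet visible, so the message is redelivered a moment later, after the database commit has completed:

        import javax.annotation.Resource;
        import javax.ejb.EJBException;
        import javax.ejb.MessageDriven;
        import javax.ejb.MessageDrivenContext;
        import javax.jms.JMSException;
        import javax.jms.Message;
        import javax.jms.MessageListener;
        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;

        @MessageDriven(mappedName = "jms/orderQueue")
        public class OrderMdb implements MessageListener {

            @Resource
            private MessageDrivenContext ctx;

            @PersistenceContext
            private EntityManager em;

            public void onMessage(Message message) {
                try {
                    long id = message.getLongProperty("orderId");
                    OrderRow row = em.find(OrderRow.class, id); // OrderRow is illustrative
                    if (row == null) {
                        // The insert is not visible yet: mark the transaction for
                        // rollback so the container redelivers the message later.
                        ctx.setRollbackOnly();
                        return;
                    }
                    row.markProcessed(); // read and update the row as usual
                } catch (JMSException e) {
                    throw new EJBException(e);
                }
            }
        }

    Most containers cap the number of redeliveries, so this wants a sane redelivery interval and a dead-letter/poison-message limit configured alongside it.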

    Read the article

  • ActionView::TemplateError (integer 23656121084180 too big to convert to `unsigned int')

    - by jaycode
    Hi, this is the weirdest error I've ever gotten on Rails. Any idea what this may be? NOTE: the error does NOT come from @order.get_invoice_number; I've tried to separate the code into multiple lines, and it was clear the problem is within {:host => ... }.

        ActionView::TemplateError (integer 23656121084180 too big to convert to `unsigned int')
        on line #56 of app/views/order_mailer/order_detail.text.html.erb:

        53: <b>Order #:</b>
        54: </td>
        55: <td width="98%">
        56: <%= link_to "#{@order.get_invoice_number}", {:host => Thread.current[:host], :controller => 'store/account', :action => 'view_order', id => "#{@order.id}"}, {:target => '_blank'} %>
        57: </td>
        58: </tr>
        59: <tr>

        app/views/order_mailer/order_detail.text.html.erb:56
        app/controllers/store/test_controller.rb:11:in `order_email'
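    One likely suspect (an educated guess from the code shown, not a confirmed diagnosis): the hash key id on line 56 is missing its colon. Bare id is not the symbol :id; it calls the deprecated Object#id (an alias for object_id) on the view object, which returns exactly the kind of huge integer seen in the error, and that integer then lands in the URL options hash. Writing :id => "#{@order.id}" would rule this out.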

    Read the article

  • Monitor the shell activity of a user on your Unix system?

    - by Joseph Turian
    Trust, but verify. Let's say I want to hire someone as a sysadmin and give them root access to my Unix system. I want to disable X Windows for them and allow only shell usage (through SSH, maybe), so that all operations they perform will be through the shell (not mouse operations). I need a tool that will log all commands they issue to a remote server, as they issue them. So even if they install a back door and cover their tracks, that will be logged remotely. How do I disable everything but shell access? Is there a tool for instantaneously logging commands to a remote server as they are issued?

    Read the article

  • Windows 2008 R2 Server Core Disk Space Requirements/Recommendations

    - by Richard West
    I'm in the preparation stage of rolling out a few Windows 2008 R2 Server Core machines in my VMware ESX environment. The documentation suggests Server Core can operate in as little as 6.5 GB of hard drive space: "Less disk space required. A Server Core installation requires only about 3.5 gigabytes (GB) of disk space to install and approximately 3 GB for operations after the installation." I am curious about anyone's real-world experience and recommendations with regard to this requirement. Is it realistic? A little bit about our environment: fewer than 25 users, and around 75 computers/servers in our current AD system. These systems will be responsible for normal AD operations and print serving for 5 printers - nothing too big here.

    Read the article

  • How to avoid Memory "Hard Fault/sec"

    - by Flavio Oliveira
    I have a problem on my Windows 2008 server x64, and I cannot understand how to solve it. Looking at Resource Monitor, I see about 100 to 200 hard faults/sec, and generally the machine is slow. From what I've read, a hard fault happens when a memory page is no longer available in physical memory, which causes disk I/O operations, and that is a problem. The current hardware is an Intel Core 2 Duo E8400 (3.0 GHz) with 6 GB RAM running Windows Web Server 2008 64-bit. The machine actually has only about 2 GB of RAM in use, leaving 4 GB available, so why does the machine require this level of disk operations? What can I do to increase performance? Am I experiencing a memory issue? What should be my starting point?

    Read the article

  • Error in ASP.NET C# code (MySQL database connection)

    - by Ishan
    My code updates a record if it already exists in the database, otherwise it inserts a new record. My code is as follows:

        protected void Button3_Click(object sender, EventArgs e)
        {
            OdbcConnection MyConnection = new OdbcConnection("Driver={MySQL ODBC 3.51 Driver};Server=localhost;Database=testcase;User=root;Password=root;Option=3;");
            MyConnection.Open();
            String MyString = "select fil_no,orderdate from temp_save where fil_no=? and orderdate=?";
            OdbcCommand MyCmd = new OdbcCommand(MyString, MyConnection);
            MyCmd.Parameters.AddWithValue("", HiddenField4.Value);
            MyCmd.Parameters.AddWithValue("", TextBox3.Text);
            using (OdbcDataReader MyReader4 = MyCmd.ExecuteReader())
            {
                //**
                if (MyReader4.Read())
                {
                    String MyString1 = "UPDATE temp_save SET order=? where fil_no=? AND orderdate=?";
                    OdbcCommand MyCmd1 = new OdbcCommand(MyString1, MyConnection);
                    MyCmd1.Parameters.AddWithValue("", Editor1.Content.ToString());
                    MyCmd1.Parameters.AddWithValue("", HiddenField1.Value);
                    MyCmd1.Parameters.AddWithValue("", TextBox3.Text);
                    MyCmd1.ExecuteNonQuery();
                }
                else
                {
                    // set the SQL string
                    String strSQL = "INSERT INTO temp_save (fil_no,order,orderdate) " +
                                    "VALUES (?,?,?)";
                    // Create the Command and set its properties
                    OdbcCommand objCmd = new OdbcCommand(strSQL, MyConnection);
                    objCmd.Parameters.AddWithValue("", HiddenField4.Value);
                    objCmd.Parameters.AddWithValue("", Editor1.Content.ToString());
                    objCmd.Parameters.AddWithValue("", TextBox3.Text);
                    // execute the command
                    objCmd.ExecuteNonQuery();
                }
            }
        }

    I am getting this error:

        ERROR [42000] [MySQL][ODBC 3.51 Driver][mysqld-5.1.51-community]You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'order,orderdate) VALUES ('04050040272009','&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&' at line 1

    The data types of the fields in table temp_save are: fil_no INT(15) (to store a 15-digit number), order LONGTEXT (to store contents from the HTMLEditor Ajax control), and orderdate DATE (to store a date). Please help me resolve my error.
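    The error almost certainly comes from the column name: ORDER is a reserved word in MySQL, and the reported syntax error points at exactly the spot where order appears in the column list. Quoting it with backticks in both statements - e.g. INSERT INTO temp_save (fil_no,`order`,orderdate) VALUES (?,?,?) and UPDATE temp_save SET `order`=? ... - or renaming the column should fix it (a diagnosis from the error text, not a tested fix).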

    Read the article

  • Using mod_rewrite in XAMPP

    - by rrrfusco
    I've followed some tutorials on how to use mod_rewrite, but it's not working out. I have a PHP index page that takes a page parameter, called as index.php?page=name1 (or name2, name3, etc.):

        <?php
        if (isset($_GET['page'])) {
            switch($_GET['page']) {
                case 'front':
                    include "front.php";
                    break;
                default:
                    break;
            }
        }
        ?>

    I'd like to use mod_rewrite so that the URLs display as site.com/name1. Is this possible with the code I'm using above? Below is what I've been trying in the Apache config files, to no avail.

    apache/conf/httpd.conf:

        line 122: LoadModule rewrite_module modules/mod_rewrite.so
        line 188: DocumentRoot "G:/xampp/htdocs"
        line 198: # default
        <Directory />
            Options FollowSymLinks
            AllowOverride None
            Order deny,allow
            Deny from all
        </Directory>
        line 215: <Directory "G:/xampp/htdocs">
        line 228: Options Indexes FollowSymLinks Includes ExecCGI
        line 235: AllowOverride All
        # cgi
        line 355: <Directory "G:/xampp/cgi-bin">
            AllowOverride None
            Options None
            Order allow,deny
            Allow from all
        </Directory>

    G:\xampp\apache\conf\extra\httpd-vhosts.conf:

        <VirtualHost *:80>
            DocumentRoot G:/xampp/htdocs/
            ServerName localhost
            ServerAdmin admin@localhost
            <Directory "G:/xampp/htdocs/localhost/">
                Options Indexes FollowSymLinks
                AllowOverride FileInfo
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

        <VirtualHost *:80>
            DocumentRoot G:/xampp/htdocs/site2/
            ServerName site2.localhost
            ServerAdmin [email protected]
            <Directory "G:/xampp/htdocs/site2.localhost/">
                Options Indexes FollowSymLinks
                AllowOverride FileInfo
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    .htaccess file:

        IndexIgnore *
        RewriteEngine on
        RewriteRule ^([^/\.]+)/?$ /index.php?page=$1 [L]

    Read the article

  • Binary Search Tree - Postorder logic

    - by daveb
    I am looking at implementing code to work out a binary search tree. Before I do this, I want to verify my input data in postorder and preorder. I am having trouble working out what the following numbers would be in postorder and preorder. I have the numbers 4, 3, 14, 8, 1, 15, 9, 5, 13, 10, 2, 7, 6, 12, 11, which I intend to insert into an empty binary search tree in that order. The order I arrived at for the numbers in POSTORDER is 2, 1, 6, 3, 7, 11, 12, 10, 9, 8, 13, 15, 14, 4. Have I got this right? I was wondering if anyone here would be able to kindly verify whether the postorder sequence I came up with is indeed the correct sequence for my input, i.e. doing left subtree, right subtree, then root. The order I got for preorder (visit root, do left subtree, do right subtree) is 4, 3, 1, 2, 5, 6, 14, 8, 7, 9, 10, 12, 11, 15, 13. I can't be certain I got this right. Very grateful for any verification. Many thanks.
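    A quick way to verify both sequences is to build the tree and print the traversals. Below is a minimal sketch, assuming the standard convention (keys smaller than a node go left, larger go right):

        import java.util.ArrayList;
        import java.util.List;

        class Node {
            int key;
            Node left, right;
            Node(int key) { this.key = key; }
        }

        public class BstTraversals {
            // Standard BST insert: smaller keys left, larger keys right.
            static Node insert(Node root, int key) {
                if (root == null) return new Node(key);
                if (key < root.key) root.left = insert(root.left, key);
                else root.right = insert(root.right, key);
                return root;
            }

            static void preorder(Node n, List<Integer> out) {
                if (n == null) return;
                out.add(n.key);               // root, then left, then right
                preorder(n.left, out);
                preorder(n.right, out);
            }

            static void postorder(Node n, List<Integer> out) {
                if (n == null) return;
                postorder(n.left, out);       // left, then right, then root
                postorder(n.right, out);
                out.add(n.key);
            }

            public static void main(String[] args) {
                int[] keys = {4, 3, 14, 8, 1, 15, 9, 5, 13, 10, 2, 7, 6, 12, 11};
                Node root = null;
                for (int k : keys) root = insert(root, k);
                List<Integer> pre = new ArrayList<Integer>();
                List<Integer> post = new ArrayList<Integer>();
                preorder(root, pre);
                postorder(root, post);
                System.out.println("preorder:  " + pre);
                System.out.println("postorder: " + post);
            }
        }

    For this insertion order it prints preorder 4, 3, 1, 2, 14, 8, 5, 7, 6, 9, 13, 10, 12, 11, 15 and postorder 2, 1, 3, 6, 7, 5, 11, 12, 10, 13, 9, 8, 15, 14, 4 - both 15 keys long, which is a useful sanity check against any hand-derived sequence.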

    Read the article

  • Business entity: private instance vs. single instance

    - by taoufik
    Suppose my WinForms application has a business entity Order, and the entity is used in multiple views, each view handling a different domain or use case in the application - for example, one managing orders, another digging into one order and displaying additional data. If I used NHibernate (or any other ORM) with one session/dataContext per view (or per DB action), I'd end up getting two different instances for the same Order (let's say orderId = 1). Although functionally the same entity, they are technically two different instances. Yes, I could implement Equals/GetHashCode to make them "seem" the same. Why would you go for a single instance per entity vs. private instances per view or per use case? Having single instances has the advantage of sharing INotifyPropertyChanged events and sharing additional (non-persistent) data. Having a private instance in each view would give you the flexibility of undo functionality at the view level. In the example above, I'd allow the user to change order details and give them the flexibility to not save the change. Here, synchronisation between the views/use cases happens at the data-persistence level. What would your argument be?

    Read the article

  • Problem storing a hash in DB using Storable::nfreeze in Perl

    - by Sam
    I want to insert a hash in the DB using Storable::nfreeze, but the data is not inserted properly. My code is as follows:

        %rec = ();
        $rec{'name'}    = 'my name';
        $rec{'address'} = 'my address';

        my $order1 = new Order();
        $order1->set_session(\%rec);
        $self->createOrder($order1);

        sub createOrder {
            my $self  = $_[0];
            my $order = $_[1];

            # Retrieve the fields to insert into the database.
            my $st = $dbh->prepare("insert into order (session,.......) values(?,........)");
            my $session = %{$order->get_session()};
            $st->execute(&Storable::nfreeze(\%session), .....);
            $st->finish();
        }

        sub getOrder {
            ...
            my $session = &Storable::thaw( $ref->{'session'} );
            .....
        }

    The thaw is working fine, because I tested it with some rows that had been inserted correctly, but when I try to get a row that was inserted using the createOrder subroutine, I get an error saying:

        Storable binary image v36.65 more recent than I am (v2.7) at blib/lib/Storable.pm (autosplit into blib/lib/auto/Storable/thaw.al) line 415

    The error comes from the line that has thaw; the nfreeze did not store the hash properly. Can someone point me to what I'm doing wrong in the createOrder subroutine? I know the module version has nothing to do with the problem.
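    Two things stand out in createOrder (a reading of the posted code, not a tested fix): my $session = %{$order->get_session()} evaluates the hash in scalar context, and the later nfreeze(\%session) then freezes a different variable - the empty or undeclared hash %session - rather than the session data. Passing the reference straight through, as in $st->execute(&Storable::nfreeze($order->get_session()), ...), avoids both problems. The "v36.65 more recent than I am" message is typical of thaw being handed bytes that were never a valid nfreeze image (or were mangled by a non-binary database column), not of a real version mismatch.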

    Read the article

  • Not enough storage is available to process this command

    - by Mohit
    I am getting this error on almost every operation on a Windows 7 Pro 32-bit machine. By operations I mean anything I do: updating a repo from Subversion, accessing a local IIS site, copying a big folder, running an installer. Sometimes if I try again, the problem goes away. I think there is something wrong with Windows 7. I searched around and found posts suggesting increasing the IRPStackSize value in the registry; I did that with no luck. I am using Microsoft Security Essentials Version 1.0.1961.0 as my antivirus package. Once this error starts popping up, I have to restart, and then after some random amount of time it starts showing up again. Any help is appreciated. I am losing a lot of time restarting my system and retrying again and again.

    Read the article

  • ConcurrentLinkedQueue$Node remains in heap after remove()

    - by action8
    I have a multithreaded app writing and reading a ConcurrentLinkedQueue, which is conceptually used to back entries in a list/table. I originally used a ConcurrentHashMap for this, which worked well. A new requirement was to track the order entries came in, so they could be removed in oldest-first order, depending on some conditions. ConcurrentLinkedQueue appeared to be a good choice, and functionally it works well. A configurable number of entries are held in memory, and when a new entry is offered once the limit is reached, the queue is searched in oldest-first order for one that can be removed. Certain entries are not to be removed by the system; they wait for client interaction. What appears to be happening is that I have an entry at the front of the queue that occurred, say, 100K entries ago. The queue appears to hold the configured number of entries (size() == 100), but when profiling, I found ~100K ConcurrentLinkedQueue$Node objects in memory. This appears to be by design: just glancing at the source for ConcurrentLinkedQueue, a remove merely nulls the reference to the stored object but leaves the linked-list node in place for iteration. Finally, my question: is there a "better" lazy way to handle a collection of this nature? I love the speed of ConcurrentLinkedQueue; I just can't afford the unbounded leak that appears to be possible in this case. If not, it seems I'd have to create a second structure to track order, which may have the same issues, plus a synchronization concern.
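    One option worth sketching (assuming the lock cost is acceptable; Entry here is a stand-in for the real element type): LinkedBlockingDeque takes a lock on mutation, but its remove(Object) fully unlinks interior nodes, so removing an entry actually frees it rather than leaving a dead node chained in the list:

        import java.util.concurrent.LinkedBlockingDeque;

        // "Entry" is a stand-in for the real element type.
        interface Entry { boolean isRemovable(); }

        public class OrderedStore {
            // Oldest entries at the head; interior removals really unlink nodes.
            private final LinkedBlockingDeque<Entry> order =
                    new LinkedBlockingDeque<Entry>();

            public void add(Entry e) {
                order.addLast(e); // newest entries go to the tail
            }

            // Scan oldest-first; remove(e) returns true only for the one thread
            // that actually unlinks the node, guarding the check-then-act race.
            public Entry removeOldestRemovable() {
                for (Entry e : order) {
                    if (e.isRemovable() && order.remove(e)) {
                        return e;
                    }
                }
                return null;
            }
        }

    The trade-off is lock contention on every mutation versus ConcurrentLinkedQueue's lock-free offers, so it suits workloads where removals are common enough for the retained-node growth to matter.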

    Read the article

  • Improving field get and set performance with ASM or Javassist

    - by ng
    I would like to avoid reflection in an open source project I am developing. Here I have classes like the following:

        public class PurchaseOrder {
            @Property
            private Customer customer;

            @Property
            private String name;
        }

    I scan for the @Property annotation to determine what I can set and get from the PurchaseOrder reflectively. There are many such classes, all using java.lang.reflect.Field.get() and java.lang.reflect.Field.set(). Ideally I would like to generate for each property an invoker like the following:

        public interface PropertyAccessor<S, V> {
            public void set(S source, V value);
            public V get(S source);
        }

    Now when I scan the class I can create a static inner class of PurchaseOrder like so:

        static class customer_Field implements PropertyAccessor<PurchaseOrder, Customer> {
            public void set(PurchaseOrder order, Customer customer) {
                order.customer = customer;
            }
            public Customer get(PurchaseOrder order) {
                return order.customer;
            }
        }

    With these I totally avoid the cost of reflection; I can now set and get from my instances with native performance. Can anyone tell me how I would do this? A code example would be great. I have searched the net for a good example but can find nothing like this; the ASM and Javassist examples out there are pretty poor too. The key here is that I have an interface that I can pass around, so I can have various implementations: perhaps one with Java reflection as a default, one with ASM, and one with Javassist? Any help would be greatly appreciated.
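    For the Javassist route, a rough sketch of the generation step could look like the following (class and method names are illustrative and error handling is omitted; note a generated class can only reference fields it can legally access, so this assumes the field is made non-private or accessor methods are targeted instead - and since Javassist's source compiler predates generics, the methods are written against the erased Object signatures):

        import javassist.ClassPool;
        import javassist.CtClass;
        import javassist.CtNewMethod;

        public class AccessorGenerator {
            @SuppressWarnings("unchecked")
            public static <S, V> PropertyAccessor<S, V> generate(Class<S> owner, String field)
                    throws Exception {
                String type = owner.getDeclaredField(field).getType().getName();
                ClassPool pool = ClassPool.getDefault();

                CtClass cc = pool.makeClass(owner.getName() + "_" + field + "_Accessor");
                cc.addInterface(pool.get(PropertyAccessor.class.getName()));

                // Erased signatures; calls through the generic interface land here.
                cc.addMethod(CtNewMethod.make(
                    "public void set(Object source, Object value) {"
                  + "  ((" + owner.getName() + ") source)." + field
                  + "      = (" + type + ") value;"
                  + "}", cc));
                cc.addMethod(CtNewMethod.make(
                    "public Object get(Object source) {"
                  + "  return ((" + owner.getName() + ") source)." + field + ";"
                  + "}", cc));

                return (PropertyAccessor<S, V>) cc.toClass().newInstance();
            }
        }

    Usage would then be along the lines of PropertyAccessor<PurchaseOrder, Customer> acc = AccessorGenerator.generate(PurchaseOrder.class, "customer");.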

    Read the article

  • Is there a way to communicate with a DBMS via raw memory blocks or binaries?

    - by darkcminor
    I am trying to connect a numerical matrix operations library like LAPACK to a DBMS. Is it possible to send/receive complete matrices as binary or as direct memory pointers in order to process them? (It would work something like this: the outside library processes data stored in the DBMS, computes some huge matrix stuff, and then the DBMS gets the result back from the library via a memory block or binary.) The main purpose is speed: avoiding a pass through a flat file and, last but not least, using the library to efficiently do some operations DBMSs are not designed for. Do Oracle, SQL Server, or MySQL support this technique?

    Read the article

  • Correct password for ssh key rejected when ssh-d into machine

    - by user20342
    When I am logged into my machine directly, I can do all git operations, and when prompted for a password, the password is accepted. When I ssh into the same box and run git operations on the same repos, the password is rejected. The relevant section of .ssh/config looks like this:

        # Generic settings
        Host *
            ServerAliveInterval 600
            ControlPath /tmp/ssh-%r@%h:%p
            ControlMaster auto
            KeepAlive yes
            IdentityFile ~/.ssh/id_rsa.pub

    The transaction looks like this when I ssh into my box:

        {12-12-03 9:41}hbrown-wks2:~/workspace/spt/project@master??? hbrown% git pull
        Enter passphrase for key '/home/hbrown/.ssh/id_rsa.pub':
        Enter passphrase for key '/home/hbrown/.ssh/id_rsa.pub':
        Enter passphrase for key '/home/hbrown/.ssh/id_rsa.pub':
        Permission denied (publickey).
        fatal: Could not read from remote repository.

        Please make sure you have the correct access rights and the repository exists.

    Using bash does not appear to make a difference (i.e. ssh-agent /bin/bash). This is a recent development, but I can't cite the change that caused it.
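    Two things stand out in that config (educated guesses, not a verified fix): IdentityFile points at the public key (~/.ssh/id_rsa.pub) rather than the private key ~/.ssh/id_rsa, and the direct-login case may only be working because a local ssh-agent or desktop keyring already holds the decrypted key - something an incoming ssh session will not inherit unless agent forwarding (ssh -A) is enabled. Pointing IdentityFile at the private key and checking ssh-add -l inside the ssh session would narrow it down.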

    Read the article

  • Oracle command hangs when using view for "WHERE x IN..." subquery

    - by Calvin Fisher
    I'm working on a web service that fetches data from an Oracle data source in chunks and passes it back to an indexing/search tool in XML format. I'm the C#/.NET guy, and am kind of fuzzy on parts of Oracle. Our Oracle team gave us the following script to run, and it works well:

        SELECT ROWID, [columns] FROM [table]
        WHERE ROWID IN (
            SELECT ROWID FROM (
                SELECT ROWID FROM [table]
                WHERE ROWID > '[previous_batch_last_rowid]'
                ORDER BY ROWID
            ) WHERE ROWNUM <= 10000
        )
        ORDER BY ROWID

    10,000 rows is an arbitrary but reasonable chunk size, and ROWID is sufficiently unique for our purposes to use as a UID, since each indexing run hits only one table at a time. Bracketed values are filled in programmatically by the web service. Now we're going to start adding views to the indexing, each of which will union a few separate tables. Since ROWID would no longer function as a unique identifier, they added a column to the views (VIEW_UNIQUE_ID) that concatenates the ROWIDs from the component tables to construct a UID for each union. But this script does not work, even though it follows the same form as the previous one:

        SELECT VIEW_UNIQUE_ID, [columns] FROM [view]
        WHERE VIEW_UNIQUE_ID IN (
            SELECT VIEW_UNIQUE_ID FROM (
                SELECT VIEW_UNIQUE_ID FROM [view]
                WHERE ROWID > '[previous_batch_last_view_unique_id]'
                ORDER BY VIEW_UNIQUE_ID
            ) WHERE ROWNUM <= 10000
        )
        ORDER BY VIEW_UNIQUE_ID

    It hangs indefinitely with no response from the Oracle server. I've waited 20+ minutes and the SQLTools dialog box indicating a running query remains the same, with no progress or updates. I've tested each subquery independently and each works fine and takes a very short amount of time (<= 1 second), so the view itself is sound. But as soon as the inner two SELECT queries are added with "WHERE VIEW_UNIQUE_ID IN...", it hangs. Why doesn't this query work for views? In what important way are they not interchangeable here?

    Read the article

  • Good Starting Points for Optimizing Database Calls in Ruby on Rails?

    - by viatropos
    I have a menu in Rails which grabs a nested tree of Post models, each of which has a Slug model associated via a polymorphic association (using the friendly_id gem for slugs and awesome_nested_set for the tree). The database output in development looks like this (here's the full gist):

        SQL (0.4ms)       SELECT COUNT(*) AS count_id FROM "posts" WHERE ("posts".parent_id = 39)
        CACHE (0.0ms)     SELECT "posts".* FROM "posts" WHERE ("posts"."id" = 13) LIMIT 1
        CACHE (0.0ms)     SELECT "slugs".* FROM "slugs" WHERE ("slugs".sluggable_id = 13 AND "slugs".sluggable_type = 'Post') ORDER BY id DESC LIMIT 1
        Slug Load (0.4ms) SELECT "slugs".* FROM "slugs" WHERE ("slugs".sluggable_id = 40 AND "slugs".sluggable_type = 'Post') ORDER BY id DESC LIMIT 1
        SQL (0.3ms)       SELECT COUNT(*) AS count_id FROM "posts" WHERE ("posts".parent_id = 40)
        CACHE (0.0ms)     SELECT "posts".* FROM "posts" WHERE ("posts"."id" = 13) LIMIT 1
        CACHE (0.0ms)     SELECT "slugs".* FROM "slugs" WHERE ("slugs".sluggable_id = 13 AND "slugs".sluggable_type = 'Post') ORDER BY id DESC LIMIT 1
        Slug Load (0.4ms) SELECT "slugs".* FROM "slugs" WHERE ("slugs".sluggable_id = 41 AND "slugs".sluggable_type = 'Post') ORDER BY id DESC LIMIT 1
        ...
        Rendered shared/_menu.html.haml (907.6ms)

    What are some quick things I should always do to optimize this from the start (easy things)? Some things I'm thinking about now: Can Rails 3 eager load the whole Post tree plus associated Slugs in one DB call? Can I do that easily with named scopes or custom SQL? What is best practice in this situation? I'm not really thinking about memcached here, as that can be applied to much more than just this.

    Read the article

  • Problem storing a hash in DB using Storable::nfreeze in Perl

    - by Sam
    Hello, I want to insert a hash in the DB using Storable::nfreeze, but the data is not inserted properly. The code is as follows:

        %rec = ();
        $rec{'name'}    = 'my name';
        $rec{'address'} = 'my address';

        my $order1 = new Order();
        $order1->set_session(\%rec);
        $self->createOrder($order1);

        sub createOrder {
            my $self  = $_[0];
            my $order = $_[1];

            # Retrieve the fields to insert into the database.
            my $st = $dbh->prepare("insert into order (session,.......) values(?,........)");
            my $session = %{$order->get_session()};
            $st->execute(&Storable::nfreeze(\%session), .....);
            $st->finish();
        }

        sub getOrder {
            ...
            my $session = &Storable::thaw( $ref->{'session'} );
            .....
        }

    The thaw is working fine, because I tested it with some rows that had been inserted correctly, but when I try to get a row that was inserted using the createOrder subroutine, I get an error saying:

        Storable binary image v36.65 more recent than I am (v2.7) at blib/lib/Storable.pm (autosplit into blib/lib/auto/Storable/thaw.al) line 415

    The error comes from the line that has thaw; the nfreeze did not store the hash properly. Can someone point me to what I'm doing wrong in the createOrder subroutine? Thanks in advance. I know the module version has nothing to do with the problem.

    Read the article

  • postgres - ERROR: operator does not exist

    - by cino21122
    Again, I have a function that works fine locally, but moving it online yields a big fat error... Taking a cue from a response in which someone had pointed out that the number of arguments I was passing wasn't accurate, I double-checked in this situation to be certain that I am passing 5 arguments to the function itself.

        Query failed: ERROR: operator does not exist: point <@> point
        HINT: No operator matches the given name and argument type(s). You may need to add explicit type casts.

    The query is this:

        BEGIN;
        SELECT zip_proximity_sum('zc',
            (SELECT g.lat FROM geocoded g LEFT JOIN masterfile m ON g.recordid = m.id WHERE m.zip = '10050' ORDER BY m.id LIMIT 1),
            (SELECT g.lon FROM geocoded g LEFT JOIN masterfile m ON g.recordid = m.id WHERE m.zip = '10050' ORDER BY m.id LIMIT 1),
            (SELECT m.zip FROM geocoded g LEFT JOIN masterfile m ON g.recordid = m.id WHERE m.zip = '10050' ORDER BY m.id LIMIT 1),
            10);

    The PG function is this:

        CREATE OR REPLACE FUNCTION zip_proximity_sum(refcursor, numeric, numeric, character, numeric)
        RETURNS refcursor AS
        $BODY$
        BEGIN
            OPEN $1 FOR
                SELECT r.zip, point($2,$3) <@> point(g.lat, g.lon) AS distance
                FROM geocoded g
                LEFT JOIN masterfile r ON g.recordid = r.id
                WHERE (geo_distance(point($2,$3), point(g.lat,g.lon)) < $5)
                ORDER BY r.zip, distance;
            RETURN $1;
        END;
        $BODY$
        LANGUAGE 'plpgsql' VOLATILE COST 100;
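    Both the <@> operator and the geo_distance() function used in the function body come from PostgreSQL's contrib earthdistance module (which depends on the cube module), so "operator does not exist: point <@> point" on the online server most likely means earthdistance is installed locally but missing there. Loading it into the production database (CREATE EXTENSION cube; CREATE EXTENSION earthdistance; on 9.1+, or running the contrib SQL scripts on older releases) would be the first thing to try - a diagnosis from the error text, not a tested fix.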

    Read the article

  • Improving field get and set performance with ASM

    - by ng
    I would like to avoid reflection in an open source project I am developing. Here I have classes like the following:

        public class PurchaseOrder {
            @Property
            private Customer customer;

            @Property
            private String name;
        }

    I scan for the @Property annotation to determine what I can set and get from the PurchaseOrder reflectively. There are many such classes, all using java.lang.reflect.Field.get() and java.lang.reflect.Field.set(). Ideally I would like to generate for each property an invoker like the following:

        public interface PropertyAccessor<S, V> {
            public void set(S source, V value);
            public V get(S source);
        }

    Now when I scan the class I can create a static inner class of PurchaseOrder like so:

        static class customer_Field implements PropertyAccessor<PurchaseOrder, Customer> {
            public void set(PurchaseOrder order, Customer customer) {
                order.customer = customer;
            }
            public Customer get(PurchaseOrder order) {
                return order.customer;
            }
        }

    With these I totally avoid the cost of reflection; I can now set and get from my instances with native performance. Can anyone tell me how I would do this? A code example would be great. I have searched the net for a good example but can find nothing like this; the ASM examples out there are pretty poor too. The key here is that I have an interface that I can pass around, so I can have various implementations: perhaps one with Java reflection as a default, one with ASM, and maybe one with Javassist? Any help would be greatly appreciated.
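    As for the reflection-based default mentioned at the end, a minimal sketch could be as simple as the following (the class name is illustrative, and setAccessible(true) assumes no security manager forbids it), with the bytecode-generated variants swapped in behind the same interface where profiling justifies it:

        import java.lang.reflect.Field;

        public class ReflectiveAccessor<S, V> implements PropertyAccessor<S, V> {

            private final Field field;

            public ReflectiveAccessor(Class<S> owner, String name) throws NoSuchFieldException {
                field = owner.getDeclaredField(name);
                field.setAccessible(true); // allow access to the private field
            }

            public void set(S source, V value) {
                try {
                    field.set(source, value);
                } catch (IllegalAccessException e) {
                    throw new RuntimeException(e);
                }
            }

            @SuppressWarnings("unchecked")
            public V get(S source) {
                try {
                    return (V) field.get(source);
                } catch (IllegalAccessException e) {
                    throw new RuntimeException(e);
                }
            }
        }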

    Read the article

  • What are performance limits of a database?

    - by Tommy
    What are some rough performance limits (reads/s, writes/s) for a single database server (no master-slave architecture), assuming storage on disk? How many reads/s and writes/s, depending on the kind of disk (SSD vs. non-SSD), assuming simple operations (select one row by primary key, update one row, correctly indexed)? I assume this limit is dependent on disk seek/write performance. EDIT: My question is more about getting rough metrics for the number of operations a database supports: to be able to know, for example, whether a new feature triggering 300 inserts/s can be supported without scaling out with additional servers.
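    Very rough rules of thumb (ballparks, not benchmarks): a single 7,200 RPM disk handles on the order of 100-200 random seeks per second, which caps uncached random reads and individually-committed writes at roughly that magnitude; sequential log writes and cache-hit primary-key reads can run orders of magnitude faster, and SSDs push the random-I/O ceiling into the tens of thousands of operations per second. By that arithmetic, 300 simple inserts/s on one server is plausible if commits can be grouped or absorbed by a battery-backed write cache, and marginal if every insert must wait on a spinning disk individually.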

    Read the article

  • mod_rewrite with location-based ACL in Apache?

    - by Alexey
    Hi. There is a CGI-script that provides some API for our customers. Call syntax is: script.cgi?module=<str>&func=<str>[&other-options] The task is to make different authentiction rules for different modules. Optionally, it will be great to have nice URLs. My config: <VirtualHost *:80> DocumentRoot /var/www/example ServerName example.com # Global policy is to deny all <Location /> Order deny,allow Deny from all </Location> # doesn't work :( <Location /api/foo> Order deny,allow Deny from all Allow from 127.0.0.1 </Location> RewriteEngine On # The only allowed type of requests: RewriteRule /api/(.+?)/(.+) /cgi-bin/api.cgi?module=$1&func=$2 [PT] # All others are forbidden: RewriteRule /(.*) - [F] RewriteLog /var/log/apache2/rewrite.log RewriteLogLevel 5 ScriptAlias /cgi-bin /var/www/example <Directory /var/www/example> Options -Indexes AddHandler cgi-script .cgi </Directory> </VirtualHost> Well, I know that problem is order of processing that directives. <Location>s will be processed after mod_rewrite has done its work. But I believe there is a way to change it. :) Using of standard Order deny,allow + Allow from <something> directives is preferable because it's commonly used in other places like this. Thank you for your attention. :)

    Read the article
