Search Results

Search found 2134 results on 86 pages for 'numeric limits'.

Page 17 of 86

  • Process limit for user in Linux

    - by BrainCore
    This is the standard question, "How do I set a process limit for a user account in Linux to prevent fork-bombing," with an additional twist. The running program originates as a root-owned Python process, which then setuids/setgids itself as a regular user. As far as I know, at this point, any limits set in /etc/security/limits.conf do not apply; the setuid-ed process may now fork bomb. Any ideas how to prevent this?

    Read the article

  • Should we increase local port range limit on busy memcached servers

    - by Majid Azimi
    nixcraft has a tutorial on configuring a memcached server (link), which says at the end: for a busy memcached server you need to increase the system file descriptor and IP port limits; here is the code to do so:
        # Increase system IP port limits
        net.ipv4.ip_local_port_range = 2000 65000
    Why should we do this? memcached is a server, and it responds to clients from its listening port, which is 11211 by default, so we shouldn't be limited by the local port range (net.ipv4.ip_local_port_range). The only real limit is file descriptors. The local port range should only matter for servers like Squid, which generate outbound (local) traffic.

    Read the article

  • How to set ulimits for a service starting at boot?

    - by jayofdoom
    I need to set a ulimit so that MySQL can use large pages - I've done this in limits.conf. However, limits.conf (pam_limits.so) doesn't get read for init, only for "real" shells. I solved this before by adding a "ulimit -l" call to the init script's start function, but I need some sort of repeatable way to do this now that the boxes are managed with Chef, and we don't want to take over a file that's actually owned by the RPM.

    Read the article

  • Can MYSQL filter by date if date is stored as text? ex "02/10/1984"

    - by Roeland
    Hello! I am trying to modify an app for a client which already has a database of over 1000 items. The dates are stored as text in the database in the format "02/10/1984". The system allows you to add and remove fields to the catalog dynamically, and it also lets you choose which fields are available in the advanced search. The problem is that it wasn't designed with dates in mind, so when I set a field as a date and try to search by a range, the query ends up doing an AND (cfv0.value = 01/02/2004 AND cfv0.value <= 05/03/2008). I can make it so the date range passed in is a numeric time value. Is there a way that, when sending the query, the text fields (holding the dates) can be converted to numeric time values, so that at that point I am basically just comparing numbers, which would work fine? I do not have the option to convert all the current dates to numeric values due to the way the dynamic fields are set up. Thanks guys!
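    One way to handle this, sketched below as a hedged example: MySQL's STR_TO_DATE can parse the stored text at query time and UNIX_TIMESTAMP can turn the result into a number, so the range check becomes a plain numeric comparison. The table name is a placeholder, the cfv0 alias comes from the question, and the format string assumes day/month/year (use '%m/%d/%Y' if the data is month-first); note that wrapping the column in functions like this prevents any index on it from being used.

        SET @range_start = 1073001600;  -- example numeric time value for the start of the range
        SET @range_end   = 1209772800;  -- example numeric time value for the end of the range
        SELECT cfv0.*
        FROM   custom_field_value cfv0  -- placeholder table name
        WHERE  UNIX_TIMESTAMP(STR_TO_DATE(cfv0.value, '%d/%m/%Y'))
               BETWEEN @range_start AND @range_end;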

    Read the article

  • Extract XML name/value pairs from different nodes in Coldfusion

    - by Ryan French
    Hi All, I am working on some Plesk integration using the XML API and I am trying to figure out how to parse the XML response that I get back. Most of the data is fine, but the Limits and Permissions are set out differently. Essentially they look like this:
        <data>
          <limits>
            <limit>
              <name>foo</name>
              <value>bar</value>
            </limit>
            <limit>
              <name>foo2</name>
              <value>bar2</value>
            </limit>
          </limits>
        </data>
    How do I extract 'bar' from the XML, given that I know I want the value of 'foo' but not the value of 'foo2'?

    Read the article

  • R - Specifying colClasses in the read.csv

    - by Derek
    Hi, I am trying to specify the colClasses option in the read.csv function in R. In my data, the first column, "time", is basically a character vector while the rest of the columns are numeric:
        data <- read.csv("test.csv", comment.char="", colClasses=c(time="character","numeric"), strip.white=FALSE)
    In the above command, I want R to read the "time" column as "character" and the rest as numeric. Although the "data" variable did have the correct result after the command completed, R returned the following warnings, and I am wondering how I could fix them:
        Warning messages:
        1: In read.table(file = file, header = header, sep = sep, quote = quote, : not all columns named in 'colClasses' exist
        2: In tmp[i[i > 0L]] <- colClasses : number of items to replace is not a multiple of replacement length
    Thanks in advance, Derek

    Read the article

  • DBCC CHECKDB WITH DATA_PURITY gives out of range error

    - by Mark Allison
    Hi there, I have restored a SQL Server 2000 database onto SQL Server 2005 and then run DBCC CHECKDB WITH DATA_PURITY, and I get this error:
        Msg 2570, Level 16, State 3, Line 2
        Page (1:19558), slot 13 in object ID 181575685, index ID 1, partition ID 293374720802816, alloc unit ID 11899744092160 (type "In-row data"). Column "NumberOfShares" value is out of range for data type "numeric". Update column to a legal value.
    The column NumberOfShares is a numeric(19,6) data type. If I run the following:
        select max(NumberOfShares) from AUDIT_Table
        select min(NumberOfShares) from AUDIT_Table
    I get 22678647.839110 and -1845953000.000000. These values are inside the bounds of a numeric(19,6), so I'm not sure why the DBCC check fails. Any ideas how to find out why it fails? Do I need to use DBCC PAGE? How would you troubleshoot this? Thanks, Mark.
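    One way to dig further is to look at the exact page and slot the error message names: DBCC PAGE (undocumented but commonly used for this) dumps the stored bytes for slot 13, which can then be compared with what a normal SELECT returns for that row. A sketch, with the database name as a placeholder:

        DBCC TRACEON (3604);                      -- route DBCC output to the client session
        DBCC PAGE ('YourDatabase', 1, 19558, 3);  -- file 1, page 19558; print option 3 dumps each slot
        -- Slot 13 is the row CHECKDB complained about; once identified, its NumberOfShares value
        -- can be corrected with a targeted UPDATE, as the error message suggests.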

    Read the article

  • Java - JPA - Generators - @SequenceGenerator

    - by Yatendra Goel
    I am learning JPA and have some confusion about the @SequenceGenerator annotation. To my understanding, it automatically assigns a value to the numeric identity fields/properties of an entity. Q1. Does this sequence generator make use of the database's capability to generate increasing numeric values, or does it generate the number on its own? Q2. If JPA uses the database's auto-increment feature, will it work with datastores that don't have an auto-increment feature? Q3. If JPA generates the numeric value on its own, how does the JPA implementation know which value to generate next? Does it consult the database first to see what value was stored last, so as to generate the value (last + 1)? Q4. Please also throw some light on the sequenceName and allocationSize properties of the @SequenceGenerator annotation.
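    For context on Q1-Q3: @SequenceGenerator is backed by a database sequence object, so the database rather than the JPA provider owns the counter; the provider asks the sequence for a value and, depending on allocationSize and the provider's id optimizer, may hand out a whole block of ids from a single round trip. A rough sketch of the database side, using PostgreSQL syntax purely as an illustrative assumption (Oracle would use entity_id_seq.NEXTVAL):

        -- The sequence that sequenceName typically points at; the increment often mirrors
        -- allocationSize so one fetch can cover a block of ids (exact behavior is provider-dependent).
        CREATE SEQUENCE entity_id_seq START WITH 1 INCREMENT BY 50;
        SELECT nextval('entity_id_seq');  -- roughly what the provider runs when it needs a new block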

    Read the article

  • Unique constraint on more than 10 columns

    - by tk
    I have a time-series simulation model which has more than 10 input variables. The number of distinct simulation instances would be more than 1 million, and each simulation instance generates a few output rows every day. To save the simulation results in a relational database, I designed tables like this:
        Table SimulationModel {
            simul_id : integer (primary key),
            input0 : string or numeric,
            input1 : string or numeric,
            ... }
        Table SimulationOutput {
            dt : DateTime (primary key),
            simul_id : integer (primary key),
            output0 : numeric,
            ... }
    My question is: is it fine to put a unique constraint on all of the input columns of the SimulationModel table? If it is not a good idea, then what other options do I have to make sure each model is unique?
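    A multi-column unique constraint on all the inputs will work, but with 10+ columns the index key gets wide and may bump into engine limits on key size. One common alternative is to make a hash of the inputs unique instead. A minimal sketch, assuming PostgreSQL; the column list is illustrative and md5 collisions are theoretically possible, so treat it as a pragmatic de-duplication aid:

        -- Option 1: the straightforward constraint over every input column.
        ALTER TABLE SimulationModel
            ADD CONSTRAINT ux_simulation_inputs UNIQUE (input0, input1 /* ..., remaining inputs */);

        -- Option 2: keep the wide columns out of the index by hashing them.
        ALTER TABLE SimulationModel ADD COLUMN input_hash char(32);
        UPDATE SimulationModel
           SET input_hash = md5(coalesce(input0::text, '') || '|' || coalesce(input1::text, ''));
        CREATE UNIQUE INDEX ux_simulation_input_hash ON SimulationModel (input_hash);
        -- New rows need the same hash computed at insert time (in a trigger or by the application).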

    Read the article

  • Exporting to CSV with dynamic field type handling

    - by serhio
    I have to do an export from a DB to CSV: field; field; field... etc. There are 3 types of fields: Alpha, Numeric and Bool, represented as "alphaValue", intValue and True/False. I am trying to encapsulate this in a fields collection, so that on export an Alpha value is wrapped in quotes (""), a Bool is written as True/False, and a Numeric is left as is, and to build a CsvField class along these lines:
        Public Structure?Class CsvField(Of T As ???)
        End Structure
        Enum FieldType
            Alpha
            Bool
            Numeric
        End Enum
    Any suggestions welcome.

    Read the article

  • user specific persistent cck types in drupal

    - by Fion
    Is there a way to do the following? We need to have a few basic CCK types that will allow users to track their chosen parameters over a length of time. For example, one CCK type may be called "numeric tracker". It would have a field for labeling the type and a field for entering a number. User A might label one numeric tracker "miles driven"; then each day user A would use this type to enter a number. User B might label a numeric tracker "hours slept", and each day user B would enter a number. Is there a way to use CCK in this way?

    Read the article

  • Passing report values to a query

    - by Beavis
    I'm a novice with Microsoft Access, as my background is mostly .NET. I'm sure what I'm trying to accomplish is dead simple, but I need some direction. I have a report and a query. The query returns a single numeric value based on a single numeric criterion: Select total from table where id = [topic]. I have placed a text box on my report so I can feed the id to this query and in return get the total. It seems like DLookUp is what I want, but no matter how I construct it, I get "#Error" in the text box when I run the report. Currently my DLookUp looks like this (I just hard-coded the value for now, for simplicity): =DLookUp("[total]","myquery","[topic] = 3"). How can I pass a value from a field on my report to a query so I can return the query's single numeric value? Thanks.
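    A likely cause of the #Error, for what it's worth: the saved query itself declares the [topic] parameter, and DLookUp cannot supply it. Two common workarounds, with MyReport, txtTopicID and the table name as placeholder assumptions: either look up against the table directly and build the criteria string from the report control, e.g. =DLookUp("[total]","table","[id]=" & [txtTopicID]), or keep the saved query but let it read the value straight from the open report's control, as in this Access SQL sketch:

        SELECT total
        FROM   [table]
        WHERE  id = [Reports]![MyReport]![txtTopicID];

    With the second form the text box can use =DLookUp("[total]","myquery") with no criteria at all, since the query does the filtering while the report is open.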

    Read the article

  • Why do I have to set the max length of every damn text column in the database?

    - by John Leidegren
    Why is it that every RDBMS insists that you tell it what the max length of a text field is going to be... why can't it just infer this information from the data that's put into the database? I've mostly worked with MS SQL Server, but every other database I know also demands that you set these arbitrary limits on your data schema. The reality is that this is not particularly helpful or friendly to work with, because the business requirements change all the time and almost every day some end-user is trying to put a lot of text into that column. Does anyone with some inner working knowledge of an RDBMS know why we don't just infer the limits from the data that's put into the storage? I'm not talking about guessing the type information, but guessing the limits of a particular text column. I mean, there's a reason why I don't use nvarchar(max) on every text column in the database.
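    To make that last point concrete, here is a small illustration of one such reason, using SQL Server syntax as an assumption: a column declared with the unlimited type cannot be used as an index key, so the declared limit is part of what keeps a text column indexable.

        CREATE TABLE Customer (
            Email nvarchar(450) NOT NULL,  -- bounded: allowed as an index key column
            Notes nvarchar(max) NULL       -- unbounded: not allowed as an index key column
        );
        CREATE INDEX IX_Customer_Email ON Customer (Email);     -- works
        -- CREATE INDEX IX_Customer_Notes ON Customer (Notes);  -- would fail: max types cannot be key columns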

    Read the article

  • Details of 5GB and 50GB SQL Azure databases have now been released, along with new price points

    - by Eric Nelson
    Like many others signed up to the Windows Azure Platform, I received an email overnight detailing the upcoming database size changes for SQL Azure. I know from our work with early adopters over the last 12 months that the 1GB and 10GB limits were sometimes seen as blockers, especially when migrating existing applications to SQL Azure. On June 28th 2010, we will be increasing the size limits:
        SQL Azure Web Edition database: from 1 GB to 5 GB
        SQL Azure Business Edition database: from 10 GB to 50 GB
    Along with these changes come new price points, including the option to increase in increments of 10GB:
        Web Edition:
            Up to 1 GB relational database = $9.99 / month
            Up to 5 GB relational database = $49.95 / month
        Business Edition:
            Up to 10 GB relational database = $99.99 / month
            Up to 20 GB relational database = $199.98 / month
            Up to 30 GB relational database = $299.97 / month
            Up to 40 GB relational database = $399.96 / month
            Up to 50 GB relational database = $499.95 / month
    Check out the full SQL Azure pricing. Related Links:
        http://ukazure.ning.com - UK community site
        Getting started with the Windows Azure Platform

    Read the article

  • Talend Enterprise Data Integration overperforms on Oracle SPARC T4

    - by Amir Javanshir
    The SPARC T microprocessor, released in 2005 by Sun Microsystems, and now continued at Oracle, has a good track record in parallel execution and multi-threaded performance. However, it was less suited for pure single-threaded workloads. The new SPARC T4 processor is now filling that gap by offering a 5x better single-thread performance over previous generations. Following our long-term relationship with Talend, a fast-growing ISV positioned by Gartner in the “Visionaries” quadrant of the “Magic Quadrant for Data Integration Tools”, we decided to test some of their integration components with the T4 chip, more precisely on a T4-1 system, in order to verify first hand if this new processor stands up to its promises.
    Several tests were performed, mainly focused on:
        Single-thread performance of the new SPARC T4 processor compared to an older SPARC T2+ processor
        Overall throughput of the SPARC T4-1 server using multiple threads
    The tests consisted of reading large amounts of data --tens of gigabytes--, processing them and writing them back to a file or an Oracle 11gR2 database table. They are CPU, memory and IO bound tests. Given the main focus of this project --CPU performance--, bottlenecks were removed as much as possible on the memory and IO sub-systems. When possible, the data to process was put into the ZFS filesystem cache, for instance. Also, two external storage devices were directly attached to the servers under test, each one divided in two ZFS pools for read and write operations.
    Multi-thread: Testing throughput on the Oracle T4-1
    The tests were performed with different numbers of simultaneous threads (1, 2, 4, 8, 12, 16, 32, 48 and 64) and using different storage devices: Flash, Fibre Channel storage, two striped internal disks and one single internal disk. All storage devices used ZFS as filesystem and volume management. Each thread read a dedicated 1GB-large file containing 12.5M lines with the following structure:
        customerID;FirstName;LastName;StreetAddress;City;State;Zip;Cust_Status;Since_DT;Status_DT
        1;Ronald;Reagan;South Highway;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008
        2;Theodore;Roosevelt;Timberlane Drive;Columbus;Louisiana;75677;A;10-05-2009;27-05-2008
        3;Andrew;Madison;S Rustle St;Santa Fe;Arkansas;75677;A;29-04-2005;09-02-2008
        4;Dwight;Adams;South Roosevelt Drive;Baton Rouge;Vermont;75677;A;15-02-2004;26-01-2007
        […]
    The following graphs present the results of our tests: unsurprisingly, up to 16 threads all files fit in the ZFS cache (a.k.a. L2ARC): once the cache is hot there is no performance difference depending on the underlying storage. From 16 threads upwards, however, it is clear that IO becomes a bottleneck; having a good IO subsystem is thus key. Single-disk performance collapses, whereas the Sun F5100 and ST6180 arrays allow the T4-1 to scale quite seamlessly. From 32 to 64 threads, the performance is almost constant with just a slow decline. For the database load tests, only the best IO configuration --using external storage devices-- was used, hosting the Oracle table spaces and redo log files. Using the Sun Storage F5100 array allows the T4-1 server to scale up to 48 parallel JVM processes before saturating the CPU. The final result is a staggering 646K lines per second inserted into an Oracle table using 48 parallel threads.
    Single-thread: Testing the single-thread performance
    Seven different tests were performed on both servers.
    Since only one thread, and thus one file, was read, no IO bottleneck was involved; all data was served from the ZFS cache.
        Read File → Filter → Write File: Read file, filter data, write the filtered data in a new file. The filter is set on the “Status” column: only lines with status set to “A” are selected. This limits each output file to about 500 MB.
        Read File → Load Database Table: Read file, insert into a single Oracle table.
        Average: Read file, compute the average of a numeric column, write the result in a new file.
        Division & Square Root: Read file, perform a division and square root on a numeric column, write the result data in a new file.
        Oracle DB Dump: Dump the content of an Oracle table (12.5M rows) into a CSV file.
        Transform: Read file, transform, write the result data in a new file. The transformations applied are: set the address column to upper case and add an extra column at the end, which is the concatenation of two columns.
        Sort: Read file, sort a numeric and an alphanumeric column, write the result data in a new file.
    The following table and graph present the final results of the tests. The throughput unit is thousand lines per second processed (K lines/second). Improvement is the % of improvement between the T5140 and the T4-1.
        Test                     T4-1 (Time s.)   T5140 (Time s.)   Improvement   T4-1 (Throughput)   T5140 (Throughput)
        Read/Filter/Write        125              806               645%          100                 16
        Read/Load Database       195              1111              570%          64                  11
        Average                  96               557               580%          130                 22
        Division & Square Root   161              1054              655%          78                  12
        Oracle DB Dump           164              945               576%          76                  13
        Transform                159              1124              707%          79                  11
        Sort                     251              1336              532%          50                  9
    The improvement in single-thread performance is quite dramatic: depending on the test, the T4 is between 5.4 and 7 times faster than the T2+. It seems clear that the SPARC T4 processor has gone a long way toward filling the gap in single-thread performance, without sacrificing the multi-threaded capability, as it still shows very impressive scaling on heavy-duty multi-threaded jobs. Finally, as always at Oracle ISV Engineering, we are happy to help our ISV partners test their own applications on our platforms, so don't hesitate to contact us and let's see what the SPARC T4-based systems can do for your application! "As described in this benchmark, Talend Enterprise Data Integration has overperformed on the T4. I was generally happy to see that the T4 gave scaling opportunities for many scenarios like complex aggregations. Row-by-row insertion into an Oracle DB is faster, with more than 650,000 rows per second without using any bulk Oracle capabilities!" Cedric Carbone, Talend CTO.

    Read the article

  • Fork bomb protection not working : Amount of processes not limited

    - by d_inevitable
    I just came to realize that my system is not limiting the number of processes per user properly, thus not preventing a user from running a fork bomb and crashing the entire system:
        user@thebe:~$ cat /etc/security/limits.conf | grep user
        user hard nproc 512
        user@thebe:~$ ulimit -u
        1024
        user@thebe:~$ :(){ :|:& };:
        [1] 2559
        user@thebe:~$ -bash: fork: Cannot allocate memory
        -bash: fork: Cannot allocate memory
        -bash: fork: Cannot allocate memory
        -bash: fork: Cannot allocate memory
        -bash: fork: Cannot allocate memory
        -bash: fork: Cannot allocate memory
        -bash: fork: Cannot allocate memory
        ...
        Connection to thebe closed by remote host.
    Is this a bug? Why is it ignoring the limit in limits.conf, and why is it not applying the limit that ulimit -u claims to be in effect? PS: I really don't think the memory limit is hit before the process limit. This machine has 8GB of RAM and it was using only 4% of it at the time I dropped the fork bomb.

    Read the article

  • SQL SERVER – Introduction to Function SIGN

    - by pinaldave
    Yesterday I received an email from a friend asking how the SIGN function works. Well, SIGN is a very fundamental function. It returns the value 1, -1 or 0: if your value is negative it returns -1, if it is positive it returns +1, and if it is zero it returns 0. Let us start with a simple small example.
        DECLARE @IntVal1 INT, @IntVal2 INT, @IntVal3 INT
        DECLARE @NumVal1 DECIMAL(4,2), @NumVal2 DECIMAL(4,2), @NumVal3 DECIMAL(4,2)
        SET @IntVal1 = 9; SET @IntVal2 = -9; SET @IntVal3 = 0;
        SET @NumVal1 = 9.0; SET @NumVal2 = -9.0; SET @NumVal3 = 0.0;
        SELECT SIGN(@IntVal1) IntVal1, SIGN(@IntVal2) IntVal2, SIGN(@IntVal3) IntVal3
        SELECT SIGN(@NumVal1) NumVal1, SIGN(@NumVal2) NumVal2, SIGN(@NumVal3) NumVal3
    The above will give us the following result set. You will notice that when the value is positive the function returns a positive value, and when the value is negative it returns a negative value. You will also notice that if the data type is INT the return value is INT, and when the value passed to the function is numeric the result matches that as well. Not every datatype is compatible with this function. Here is a quick lookup of the return types:
        bigint -> bigint
        int/smallint/tinyint -> int
        money/smallmoney -> money
        numeric/decimal -> numeric/decimal
        everybody else -> float
    The best example of the usage of this function is that you do not have to use a CASE statement. Here is an example of CASE statement usage and the same logic replaced with the SIGN function.
        USE tempdb
        GO
        CREATE TABLE TestTable (Date1 SMALLDATETIME, Date2 SMALLDATETIME)
        INSERT INTO TestTable (Date1, Date2)
        SELECT '2012-06-22 16:15', '2012-06-20 16:15'
        UNION ALL
        SELECT '2012-06-24 16:15', '2012-06-22 16:15'
        UNION ALL
        SELECT '2012-06-22 16:15', '2012-06-22 16:15'
        GO
        -- Using Case Statement
        SELECT CASE
                 WHEN DATEDIFF(d,Date1,Date2) > 0 THEN 1
                 WHEN DATEDIFF(d,Date1,Date2) < 0 THEN -1
                 ELSE 0
               END AS Col
        FROM TestTable
        GO
        -- Using SIGN Function
        SELECT SIGN(DATEDIFF(d,Date1,Date2)) AS Col
        FROM TestTable
        GO
        DROP TABLE TestTable
        GO
    This was an interesting blog post for me to write. Let me know your opinion. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Tips for adapting Date table to Power View forecasting #powerview #powerbi

    - by Marco Russo (SQLBI)
    During the keynote of the PASS Business Analytics Conference, Amir Netz presented the new forecasting capabilities in Power View for Office 365. I immediately tried the new feature (which was immediately available, a welcome surprise in a Microsoft announcement for a new release) and I had several issues trying to use existing data models. The forecasting has a few requirements that are not compatible with the “best practices” commonly used for a calendar table until this announcement. For example, if you have a Year-Month-Day hierarchy and you want to display a line chart aggregating data at the month level, you use a column containing month and year as a string (e.g. May 2014) sorted by a numeric column (such as 201405). Such a column cannot be used in the x-axis of a line chart for forecasting, because you need a date or numeric column. There are also other requirements and I wrote the article Prepare Data for Power View Forecasting in Power BI on SQLBI, describing how to create columns that can be used with the new forecasting capabilities in Power View for Office 365.
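    A sketch of the idea for the common case where the Date table comes from a relational source; DimDate, MonthStartDate and the [Date] column are illustrative names, and DATEFROMPARTS assumes SQL Server 2012 or later. The point is simply to expose a real date column, here the first day of each month, that the forecasting line chart can use on its x-axis in place of the "May 2014" text column:

        ALTER TABLE DimDate ADD MonthStartDate date;
        UPDATE DimDate
           SET MonthStartDate = DATEFROMPARTS(YEAR([Date]), MONTH([Date]), 1);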

    Read the article

  • Problems installing clamcour

    - by user10778
    Hello people, when I try to install clamcour from the terminal, it gives me this error. Can somebody help me?
        calmcourdir# ./configure
        checking for libraries containing socket functions... -lc checking for socket... yes checking for bind... yes checking for listen... yes checking for accept... yes checking for shutdown... yes checking for socklen_t... yes checking for struct sockaddr_un.sun_len... no
        System log functions
        checking syslog.h usability... yes checking syslog.h presence... yes checking for syslog.h... yes checking for openlog... yes checking for syslog... yes checking for closelog... yes
        Time functions
        checking whether time.h and sys/time.h may both be included... yes checking whether struct tm is in sys/time.h or time.h... time.h checking for localtime_r... yes checking for strftime... yes checking for unistd.h... (cached) yes checking limits.h usability... yes checking limits.h presence... yes checking for limits.h... yes checking for sysconf... yes
        BZip2 support
        checking bzlib.h usability... no checking bzlib.h presence... no checking for bzlib.h... no checking for BZ2_bzWriteOpen in -lbz2... no
        GZip support
        checking zlib.h usability... no checking zlib.h presence... no checking for zlib.h... no checking for gzopen in -lz... no
        LibClamAV support
        checking for /usr/bin/clamav-config... no checking for /usr/local/bin/courier-config... no checking for /usr/clamav/bin/clamav-config... no checking for /usr/local/clamav/bin/clamav-config... no
        ./configure: line 25234: : command not found
        configure: error: Cannot find clamav-config

    Read the article

  • Breaking 1NF to model subset constraints. Does this sound sane?

    - by Chris Travers
    My first question here. Apologies if it is in the wrong forum, but this seems pretty conceptual. I am looking at doing something that goes against conventional wisdom and want to get some feedback as to whether this is totally insane or will result in problems, so critique away! I am on PostgreSQL 9.1 but may be moving to 9.2 for this part of this project. To reiterate: does it seem sane to break 1NF in this way? I am not looking for debugging code so much as for where people see problems that this might lead to.
    The Problem
    In double-entry accounting, financial transactions are journal entries with an arbitrary number of lines. Each line has either a left value (debit) or a right value (credit), which can be modelled as a single value with negatives as debits and positives as credits, or vice versa. The sum of all debits and credits must equal zero (so if we go with a single amount field, sum(amount) must equal zero for each financial journal entry). SQL-based databases, pretty much required for this sort of work, have no way to express this sort of constraint natively, and so any approach to enforcing it in the database seems rather complex.
    The Write Model
    The journal entries are append-only. There is a possibility we will add a delete model, but it will be subject to a different set of restrictions and so is not applicable here. If and when we allow deletes, we will probably do them using a simple ON DELETE CASCADE designation on the foreign key, and require that deletes go through a dedicated stored procedure which can enforce the other constraints. So inserts and selects have to be accommodated, but updates and deletes do not for this task.
    My Proposed Solution
    My proposed solution is to break first normal form and model constraints on arrays of tuples, with a trigger that breaks the rows out into another table.
        CREATE TABLE journal_line (
            entry_id bigserial primary key,
            account_id int not null references account(id),
            journal_entry_id bigint not null, -- adding references later
            amount numeric not null
        );
    I would then add "table methods" to extract debits and credits for reporting purposes:
        CREATE OR REPLACE FUNCTION debits(journal_line) RETURNS numeric
        LANGUAGE sql IMMUTABLE AS
        $$ SELECT CASE WHEN $1.amount < 0 THEN $1.amount * -1 ELSE NULL END; $$;
        CREATE OR REPLACE FUNCTION credits(journal_line) RETURNS numeric
        LANGUAGE sql IMMUTABLE AS
        $$ SELECT CASE WHEN $1.amount > 0 THEN $1.amount ELSE NULL END; $$;
    Then the journal entry table (simplified for this example):
        CREATE TABLE journal_entry (
            entry_id bigserial primary key, -- no natural keys :-(
            journal_id int not null references journal(id),
            date_posted date not null,
            reference text not null,
            description text not null,
            journal_lines journal_line[] not null
        );
    Then a table method and check constraints:
        CREATE OR REPLACE FUNCTION running_total(journal_entry) returns numeric
        language sql immutable as
        $$ SELECT sum(amount) FROM unnest($1.journal_lines); $$;
        ALTER TABLE journal_entry ADD CONSTRAINT CHECK (((journal_entry.running_total) = 0));
        ALTER TABLE journal_line ADD FOREIGN KEY (journal_entry_id) REFERENCES journal_entry(entry_id);
    And finally we'd have a breakout trigger:
        CREATE OR REPLACE FUNCTION je_breakout() RETURNS TRIGGER LANGUAGE PLPGSQL AS
        $$
        BEGIN
            IF TG_OP = 'INSERT' THEN
                INSERT INTO journal_line (journal_entry_id, account_id, amount)
                SELECT NEW.entry_id, account_id, amount FROM unnest(NEW.journal_lines);
                RETURN NEW;
            ELSE
                RAISE EXCEPTION 'Operation Not Allowed';
            END IF;
        END;
        $$;
    And finally:
        CREATE TRIGGER je_breakout_tg AFTER INSERT OR UPDATE OR DELETE ON journal_entry
        FOR EACH ROW EXECUTE PROCEDURE je_breakout();
    Of course the example above is simplified. There will be a status table that will track approval status, allowing for separation of duties, etc. However, the goal here is to prevent unbalanced transactions. Any feedback? Does this sound entirely insane?
    Standard Solutions?
    In getting to this point, I have to say I have looked at four different current ERP solutions to this problem:
        1. Represent every line item as a debit and a credit against different accounts.
        2. Use of foreign keys against the line item table to enforce an eventual running total of 0.
        3. Use of constraint triggers in PostgreSQL.
        4. Forcing all validation here solely through the app logic.
    My concerns are that #1 is pretty limiting and very hard to audit internally. It's not programmer-transparent, and so it strikes me as being difficult to work with in the future. The second strikes me as being very complex, requiring a series of constraints and foreign keys against itself to make it work, and therefore hard to sort out, at least in my mind, and thus hard to work with. The fourth could be done, as we force all access through stored procedures anyway, and this is the most common solution (have the app total things up and throw an error otherwise). However, I think proof that a constraint is followed is superior to test cases, and so the question becomes whether this in fact generates insert anomalies rather than solving them. If this is a solved problem, it isn't the case that everyone agrees on the solution....

    Read the article

  • Page Load Time - "Waiting on..." taking ages. What part of page request process is hung?

    - by James
    I have a new cluster site running on Magento that's on a development server made up of 2 x web servers and 1 x database server. I have optimized the site in all the areas I know (gzip, increasing PHP memory limits, increasing database memory limits, etc.) but sometimes the page loading gets stuck on 'waiting for xxx.xx.xx.xxx' (Chrome and other browsers; Chrome just shows it that way). It can sit there for 40+ seconds, and sometimes it just never loads and I close it in frustration. What part of the page loading process is this hung at? Is it a server issue, a database issue, or a platform issue? I need to know where to start, or whether to push the hosting provider about it.

    Read the article

  • Google Maps API: Premier License or excess map loads?

    - by j0nes
    I am currently looking for a way to deal with the Google Maps API usage limits. I am planning a redesign of our page that will probably get around 2 million map loads per month. This will surely break the usage limit of 750,000 map loads per month available in the free version. If we pay for excess map loads, this means we would have to pay $5,000 per month. The other option would be to use a Premier license, however there is very little information available on its usage limits and price. I have filled in the request form to get a custom offer from Google, but I have not gotten any response yet. Can any of the Premier license holders tell me which option will be cheaper for my usage pattern: paying for a Premier license or paying for excess map loads?

    Read the article
