Search Results

Search found 7216 results on 289 pages for 'low cost'.

Page 85/289 | < Previous Page | 81 82 83 84 85 86 87 88 89 90 91 92  | Next Page >

  • Zend php memory memory_limit

    - by RepDetec
    All, I am working on a Zend Framework based web application. We keep encountering out-of-memory errors on our dev server:

        Allowed memory size of XXXX bytes exhausted (tried YYYY...

    We keep increasing memory_limit in php.ini, but it is now up over 1000 megs. What is a normal memory_limit value? What are the usual suspects in PHP/Zend for running out of memory? We are using the Propel ORM. Thanks for all of the help!

    Update: I cannot reproduce this error in my Windows environment. If I set memory_limit low (say 16M), I get the same error, but the "tried to allocate" amount is always something reasonable, for example: (tried to allocate 13344 bytes). If I set the memory limit very low on the (Fedora 9) server (such as 16M), I get the same thing: consistent, reasonable out-of-memory errors. However, even when the memory limit is set very high on our server (128M, for example), maybe once a week I will get a crazy huge memory error: (tried to allocate 1846026201 bytes). I don't know if that might shed any more light onto what is going on.

    We are using Propel 1.5. It sounds like the actual release is going to come out later this month, but it doesn't look like anyone else is having this problem with it anyway, so I don't know that Propel is the problem. We are using Zend Server with PHP 5.2 on the Linux box, and 5.3 locally. Any more ideas? I have a ticket out to get Xdebug installed on the Linux box. Thanks, -rep

    Read the article

  • rails + sheevaplug = rails home development server and more

    - by microspino
    Hello, I'd like to build a "Rails Brick" using a Sheevaplug from Marvell (the O.S. is Ubuntu out of the box, but you can install other distributions on it). It will be a home server and a silent, low cost ($99), low energy development machine. I'd like to add Rails, RVM, lots of gems, git-based Heroku-like deployment, and Passenger + nginx. This way I could have a portable server with a complete development environment, and maybe I could find a hosting company where I can co-locate a grid of these devices, or I could sell it as a simple little server for offices of 10 or fewer users, with some centralized Rails services (I am thinking of a CMS, a blog, a wiki, a calendar, or whatever this little jewel could afford). The USB port could make it a print server too, or a UMTS link to the web via HUAWEI-like USB UMTS keys. Can you give me some hints about:

    - Is this project a crazy-close-to-failure idea? Why?
    - Which gems would you include?
    - Which Rails open source apps would you suggest?

    I already have an Excito Bubba server at home, and I saw the TonidoPlug, so it came to mind to build something similar but Rails-based (Bubba is PHP based; TonidoPlug I don't know, but it does not seem to be a Rails thing).

    Read the article

  • Isolated storage misunderstanding

    - by Costa
    Hi, this is a discussion between me and me to understand the isolated storage issue. Can you help me convince me about isolated storage?

    This is code written in a Windows Forms app (the reader) that reads the isolated storage of another WinForms app (the writer), which is signed. Where is the security if the reader can read the writer's file? I thought only signed code could access the file!

    If all .NET applications are born equal and have all permissions to access isolated storage, where is the security then? If I can install and run an EXE from isolated storage, why don't I install a virus and run it? I am trusted to access this area, but the virus or whatever will not be trusted to access the rest of the file system; it can only access the memory, and this is dangerous enough.

    I cannot see any difference between using the app data folder to save the state and using isolated storage, except a long nasty path! I want to try giving low trust to the reader code and retest, but they said "Isolated storage is actually created for giving low trusted applications the right to save their state."

    Reader code:

        private void button1_Click(object sender, EventArgs e)
        {
            String path = @"C:\Documents and Settings\All Users\Application Data\IsolatedStorage\efv5cmbz.ewt\2ehuny0c.qvv\StrongName.5v3airc2lkv0onfrhsm2h3uiio35oarw\AssemFiles\toto12\ABC.txt";
            StreamReader reader = new StreamReader(path);
            var test = reader.ReadLine();
            reader.Close();
        }

    Writer:

        private void button1_Click(object sender, EventArgs e)
        {
            IsolatedStorageFile isolatedFile = IsolatedStorageFile.GetMachineStoreForAssembly();
            isolatedFile.CreateDirectory("toto12");
            IsolatedStorageFileStream isolatedStorage = new IsolatedStorageFileStream(@"toto12\ABC.txt", System.IO.FileMode.Create, isolatedFile);
            StreamWriter writer = new StreamWriter(isolatedStorage);
            writer.WriteLine("Ana 2akol we ashrab kai a3eesh wa akbora");
            writer.Close();
            writer.Dispose();
        }

    Read the article

  • Starting a new business with former colleagues, advice needed

    - by Sparafusile
    I was recently contacted by a former boss of mine who is thinking of starting a new business. The business will be online only, and it will be my job to design, build, and maintain the software going into it. I will also have to maintain the server it's running on, being the only technical person on the team. I will be one of four members of the business, the other three being the actual business know-hows and salesmen. The other three are shouldering the cost of getting the business going (incorporation, attorney fees, etc.) and we will be splitting the cost of the server. I have no business knowledge at all, and don't want any part of that side; I am only interested in the technical aspects. Now that we are finalizing our plans, determining roles, and getting ready to start actual work, we have come to the point where we have to determine what percentage stake each of us has. Since I have never done anything like this before, or met anyone that has, I don't know what I should expect. Can anyone give me some pointers on what to ask for in this deal?

    Read the article

  • Operator issues with cout

    - by BSchlinker
    I have a simple Package class with an overloaded << operator so I can output package data simply with cout << packagename. The two data members of interest are name, which is a string, and shippingcost, which is a double.

        protected:
            string name;
            string address;
            double weight;
            double shippingcost;

        ostream &operator<<( ostream &output, const Package &package )
            output << "Package Information ---------------";
            output << "Recipient: " << package.name << endl;
            output << "Shipping Cost (including any applicable fees): " << package.shippingcost;

    The problem is occurring on the Recipient line (output << "Recipient: " ...). I'm receiving the error "no operator "<<" matches these operands". However, the shipping cost line after it is fine. I'm guessing this has to do with the data type being a string for the package name. Any ideas?

    Read the article

  • Beginner python - stuck in a loop

    - by Jeremy
    I have two beginner programs, both using a 'while' loop; one works correctly, and the other gets me stuck in a loop. The first program is this:

        num = 54
        bob = True
        print('The guess a number Game!')
        while bob == True:
            guess = int(input('What is your guess? '))
            if guess == num:
                print('wow! You\'re awesome!')
                print('but don\'t worry, you still suck')
                bob = False
            elif guess > num:
                print('try a lower number')
            else:
                print('close, but too low')
        print('game over')

    and it gives the predictable output of:

        The guess a number Game!
        What is your guess? 12
        close, but too low
        What is your guess? 56
        try a lower number
        What is your guess? 54
        wow! You're awesome!
        but don't worry, you still suck
        game over

    However, I also have this program, which doesn't work:

        #define vars
        a = int(input('Please insert a number: '))
        b = int(input('Please insert a second number: '))

        #try a function
        def func_tim(a,b):
            bob = True
            while bob == True:
                if a == b:
                    print('nice and equal')
                    bob = False
                elif b > a:
                    print('b is picking on a!')
                else:
                    print('a is picking on b!')

        #call a function
        func_tim(a,b)

    Which outputs:

        Please insert a number: 12
        Please insert a second number: 14
        b is picking on a!
        b is picking on a!
        b is picking on a!
        ...(repeat in a loop)....

    Can someone please let me know why these programs are different? Thank you!

    Read the article

  • SQL Server Clustered Index: (Physical) Data Page Order

    - by scherand
    I am struggling to understand what a clustered index in SQL Server 2005 is. I read the MSDN article Clustered Index Structures (among other things) but I am still unsure if I understand it correctly.

    The (main) question is: what happens if I insert a row (with a "low" key) into a table with a clustered index?

    The above-mentioned MSDN article states: "The pages in the data chain and the rows in them are ordered on the value of the clustered index key." And Using Clustered Indexes, for example, states: "For example, if a record is added to the table that is close to the beginning of the sequentially ordered list, any records in the table after that record will need to shift to allow the record to be inserted."

    Does this mean that if I insert a row with a very "low" key into a table that already contains a gazillion rows, literally all rows are physically shifted on disk? I cannot believe that. This would take ages, no?

    Or is it rather (as I suspect) that there are two scenarios depending on how "full" the first data page is. A) If the page has enough free space to accommodate the record, it is placed into the existing data page and data might be (physically) reordered within that page. B) If the page does not have enough free space for the record, a new data page would be created (anywhere on the disk!) and "linked" to the front of the leaf level of the B-Tree?

    This would then mean the "physical order" of the data is restricted to the "page level" (i.e. within a data page) but not to the pages residing on consecutive blocks on the physical hard drive. The data pages are then just linked together in the correct order. Or, formulated in an alternative way: if SQL Server needs to read the first N rows of a table that has a clustered index, it can read data pages sequentially (following the links), but these pages are not (necessarily) block-wise in sequence on disk (so the disk head has to move "randomly"). How close am I? :)

    Read the article

  • Development environment for ASP.NET with EpiServer

    - by Binary255
    At our company we are going to develop more for the Windows platform than we have done up until now. As this scale of Windows development is new to us, it would be nice to have some feedback from experienced developers.

    Requirements we have:

    - 5 developers from the beginning, 15 developers a year from now.
    - All developers should be able to develop at the same time.
    - Be able to develop solutions for ASP.NET and EpiServer 5.

    Our idea:

    - A shared server which developers use for development through Terminal Services.
    - SQL Server Express.
    - Start with a free Express edition of Visual Studio, and upgrade to a commercial version if we need the additional features.
    - Use IIS and not the web server built into Visual Studio.

    Questions:

    - Are we on the right track? In terms of license costs the above should be cheapest, right?
    - What do you think about multiple developers doing development on a shared TS server? Do you know of any company which has a similar development environment?
    - Are we going to miss some features of the full Visual Studio version immediately? Is using an Express version a bad choice?
    - Is IIS the best choice? If we use IIS, the developers may use the same port for deployment. If we use the built-in web server, each one has to set their own port as we're sharing a machine.

    Comment answer: We are thinking about a shared server as it will most likely decrease the license costs, so it's purely a cost issue. We are using CVS for version control. Our situation is that we develop on Mac and Linux; that's why buying one server license + Visual Studio licenses seems to be a cost-effective way of starting this type of development.

    Read the article

  • Database Design Question regarding duplicate information.

    - by galford13x
    I have a database that contains a history of product sales, for example the following table:

        CREATE TABLE SalesHistoryTable (
            OrderID,   -- Order number, unique across all orders
            ProductID, -- Product ID, can be used as a key to look up product info in another table
            Price,     -- Price of the product per unit at the time of the order
            Quantity,  -- Quantity of the product for the order
            Total,     -- Total cost of the order for the product (Price * Quantity)
            Date,      -- Date of the order
            StoreID,   -- The store that created the order
            PRIMARY KEY(OrderID));

    The table will eventually have millions of transactions. From this, profiles can be created for products in different geographical regions (based on the StoreID). Creating these profiles can be very time consuming as a database query. For example:

        SELECT ProductID, StoreID,
               SUM(Total) AS Total,
               SUM(Quantity) AS QTY,
               SUM(Total)/SUM(Quantity) AS AvgPrice
        FROM SalesHistoryTable
        GROUP BY ProductID, StoreID;

    The above query could be used to get the information based on products for any particular store. You could then determine which store has sold the most, has made the most money, and on average sells for the most/least. This would be very costly to use as a normal query run at any time. What are some design decisions that would allow these types of queries to run faster, assuming storage size isn't an issue?

    For example, I could create another table with duplicate information:

        StoreID (Key), ProductID, TotalCost, QTY, AvgPrice

    and provide a trigger so that when a new order is received, the entry for that store is updated in the new table. The cost of the update is almost nothing. What should be considered when given the above scenario?

    Read the article

  • ActiveX not working properly with default security settings

    - by Ummar
    I have written an ActiveX control in C# and have made it work using the regasm command, and it works fine as long as the security level is set to low. Then, as a next step, I made a .cab installer (ICD - Internet Component Downloader), and signed my .cab file and ActiveX .dll file with a test certificate. When I hit the HTML page from my browser, the installation part works fine with the default security settings of IE, but at the end it seems that nothing is installed and a red cross is shown in place of the ActiveX. Moreover, I have explored the Download Program Files folder under the Windows directory; in the status column it shows the word "unknown", while it says "installed" for all other ActiveX controls. What may be the problem?

    Moreover, if I use the regasm command to register the assembly, it works fine. I have signed the ActiveX, but I still have to move the security bar to low in my browser settings. Why is that so? Then what is the purpose of signing? I have used RegisterServer=yes in my .inf file.

    Please let me know if someone has gone through this problem already.

    Read the article

  • Does the query plan optimizer work well with joined/filtered table-valued functions?

    - by smoothdeveloper
    In SQL Server 2005, I'm using a table-valued function as a convenient way to perform arbitrary aggregation on subset data from a large table (passing a date range or such parameters). I'm using these inside larger queries as joined computations, and I'm wondering if the query plan optimizer works well with them in every condition, or if I'm better off unnesting such computations in my larger queries.

    - Does the query plan optimizer unnest table-valued functions if it makes sense?
    - If it doesn't, what do you recommend to avoid the code duplication that would occur by manually unnesting them?
    - If it does, how do you identify that from the execution plan?

    Code sample:

        create table dbo.customers
        (
            [key] uniqueidentifier
            , constraint pk_dbo_customers primary key ([key])
        )
        go

        /* assume large amount of data */
        create table dbo.point_of_sales
        (
            [key] uniqueidentifier
            , customer_key uniqueidentifier
            , constraint pk_dbo_point_of_sales primary key ([key])
        )
        go

        create table dbo.product_ranges
        (
            [key] uniqueidentifier
            , constraint pk_dbo_product_ranges primary key ([key])
        )
        go

        create table dbo.products
        (
            [key] uniqueidentifier
            , product_range_key uniqueidentifier
            , release_date datetime
            , constraint pk_dbo_products primary key ([key])
            , constraint fk_dbo_products_product_range_key foreign key (product_range_key)
                references dbo.product_ranges ([key])
        )
        go

        /* assume large amount of data */
        create table dbo.sales_history
        (
            [key] uniqueidentifier
            , product_key uniqueidentifier
            , point_of_sale_key uniqueidentifier
            , accounting_date datetime
            , amount money
            , quantity int
            , constraint pk_dbo_sales_history primary key ([key])
            , constraint fk_dbo_sales_history_product_key foreign key (product_key)
                references dbo.products ([key])
            , constraint fk_dbo_sales_history_point_of_sale_key foreign key (point_of_sale_key)
                references dbo.point_of_sales ([key])
        )
        go

        create function dbo.f_sales_history_..snip.._date_range
        (
            @accountingdatelowerbound datetime,
            @accountingdateupperbound datetime
        )
        returns table
        as
        return
        (
            select pos.customer_key
                 , sh.product_key
                 , sum(sh.amount) amount
                 , sum(sh.quantity) quantity
            from dbo.point_of_sales pos
            inner join dbo.sales_history sh on sh.point_of_sale_key = pos.[key]
            where sh.accounting_date between @accountingdatelowerbound and @accountingdateupperbound
            group by pos.customer_key
                   , sh.product_key
        )
        go

        -- TODO: insert some data

        -- this is a table containing a selection of product ranges
        declare @selectedproductranges table([key] uniqueidentifier)

        -- this is a table containing a selection of customers
        declare @selectedcustomers table([key] uniqueidentifier)

        declare @low datetime
              , @up datetime

        -- TODO: set top query parameters

        select saleshistory.customer_key
             , saleshistory.product_key
             , saleshistory.amount
             , saleshistory.quantity
        from dbo.products p
        inner join @selectedproductranges productrangeselection
            on p.product_range_key = productrangeselection.[key]
        inner join @selectedcustomers customerselection on 1 = 1
        inner join dbo.f_sales_history_..snip.._date_range(@low, @up) saleshistory
            on saleshistory.product_key = p.[key]
            and saleshistory.customer_key = customerselection.[key]

    I hope the sample makes sense. Many thanks for your help!

    Read the article

  • Delphi 6 OleServer.pas Invoke memory leak

    - by Mike Davis
    There's a bug in Delphi 6, which you can find some reference to online, where when you import a TLB the order of the parameters in an event invocation is reversed. It is reversed once in the imported header and once in TServerEventDispatch.Invoke. You can find more information about it here: http://cc.embarcadero.com/Item/16496

    Somewhat related to this issue, there appears to be a memory leak in TServerEventDispatch.Invoke with a parameter of a Variant of type VarArray (maybe others, but this is the more obvious one I could see). The Invoke code copies the args into a VarArray to be passed to the event handler, and then copies the VarArray back to the args after the call; the relevant code is pasted below:

        // Set our array to appropriate length
        SetLength(VarArray, ParamCount);

        // Copy over data
        for I := Low(VarArray) to High(VarArray) do
          VarArray[I] := OleVariant(TDispParams(Params).rgvarg^[I]);

        // Invoke Server proxy class
        if FServer <> nil then
          FServer.InvokeEvent(DispID, VarArray);

        // Copy data back
        for I := Low(VarArray) to High(VarArray) do
          OleVariant(TDispParams(Params).rgvarg^[I]) := VarArray[I];

        // Clean array
        SetLength(VarArray, 0);

    There are some obvious workarounds in my case: if I skip the copying back in the case of a VarArray parameter, it fixes the leak. To not change the functionality, I thought I should copy the data in the array, instead of the Variant, back to the params, but that can get complicated since it can hold other Variants, and it seems to me that would need to be done recursively.

    Since a change in OleServer will have a ripple effect, I want to make sure my change here is strictly correct. Can anyone shed some light on exactly why memory is being leaked here? I can't seem to look up the call stack any lower than TServerEventDispatch.Invoke (why is that?). I imagine that in the process of copying the Variant holding the VarArray back to the param list, it added a reference to the array, thus not allowing it to be released as normal, but that's just a rough guess and I can't track down the code to back it up. Maybe someone with a better understanding of all this could shed some light?

    Read the article

  • Why are my CATransitions acting up?

    - by Regan
    I am using the following code to switch between views with CATransition:

        CATransition *applicationLoadViewIn = [CATransition animation];
        [applicationLoadViewIn setDuration:20];
        [applicationLoadViewIn setType:kCATransitionPush];
        [applicationLoadViewIn setSubtype:kCATransitionFromTop];
        [applicationLoadViewIn setTimingFunction:[CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseIn]];
        ViewToSwitchTo *myviewcontroller = [[ViewToSwitchTo alloc] init];
        [self.view.layer addAnimation:applicationLoadViewIn forKey:kCATransitionPush];
        [self.view addSubview:myviewcontroller.view];

    It functions mostly how I want it to: it pushes from the top like it should. However, for some reason it acts strangely. First, the view I am switching to starts coming in from the bottom like it should, but the view that I am switching FROM appears over the top of it with low opacity, so you see both of them. However, you also see the view that is coming in, shifted maybe 100 pixels upwards, on top of itself and the other view, once again with low opacity. Just before the halfway point of the transition, everything works fine: you only see the view that is coming in and the view going out, doing what they should be doing. But slightly after the halfway point, the view I am switching to appears in its final destination, under the view I am switching from, and the view I am switching from has been reduced in opacity. What is going on here?

    Read the article

  • SQL Server insert performance

    - by Jose
    I have an insert query that gets generated like this:

        INSERT INTO InvoiceDetail (LegacyId,InvoiceId,DetailTypeId,Fee,FeeTax,Investigatorid,SalespersonId,CreateDate,CreatedById,IsChargeBack,Expense,RepoAgentId,PayeeName,ExpensePaymentId,AdjustDetailId)
        VALUES(1,1,2,1500.0000,0.0000,163,1002,'11/30/2001 12:00:00 AM',1116,0,550.0000,850,NULL,@ExpensePay1,NULL);
        DECLARE @InvDetail1 INT;
        SET @InvDetail1 = (SELECT @@IDENTITY);

    This query is generated for only 110K rows, but it takes 30 minutes for all of these queries to execute. I checked the query plan, and the largest % nodes are:

    - A Clustered Index Insert at 57% query cost, which has a long XML that I don't want to post.
    - A Table Spool, which is 38% query cost:

        <RelOp AvgRowSize="35" EstimateCPU="5.01038E-05" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="1" LogicalOp="Eager Spool" NodeId="80" Parallel="false" PhysicalOp="Table Spool" EstimatedTotalSubtreeCost="0.0466109">
          <OutputList>
            <ColumnReference Database="[SkipPro]" Schema="[dbo]" Table="[InvoiceDetail]" Column="InvoiceId" />
            <ColumnReference Database="[SkipPro]" Schema="[dbo]" Table="[InvoiceDetail]" Column="InvestigatorId" />
            <ColumnReference Column="Expr1054" />
            <ColumnReference Column="Expr1055" />
          </OutputList>
          <Spool PrimaryNodeId="3" />
        </RelOp>

    So my question is: what can I do to improve the speed of this thing? I already run ALTER TABLE TABLENAME NOCHECK CONSTRAINTS ALL before the queries and ALTER TABLE TABLENAME NOCHECK CONSTRAINTS ALL after the queries, and that hardly shaved anything off of the time.

    Now, I am running these queries in a .NET application that uses a SqlCommand object to send the query. I then tried to output the SQL commands to a file and execute it using sqlcmd, but I wasn't getting any updates on how it was doing, so I gave up on that. Any ideas or hints or help?

    Read the article

  • Haskell math performance

    - by Travis Brown
    I'm in the middle of porting David Blei's original C implementation of Latent Dirichlet Allocation to Haskell, and I'm trying to decide whether to leave some of the low-level stuff in C. The following function is one example; it's an approximation of the second derivative of lgamma:

        double trigamma(double x)
        {
            double p;
            int i;

            x = x + 6;
            p = 1 / (x * x);
            p = (((((0.075757575757576 * p - 0.033333333333333) * p + 0.0238095238095238)
                * p - 0.033333333333333) * p + 0.166666666666667) * p + 1) / x + 0.5 * p;
            for (i = 0; i < 6; i++) {
                x = x - 1;
                p = 1 / (x * x) + p;
            }
            return p;
        }

    I've translated this into more or less idiomatic Haskell as follows:

        trigamma :: Double -> Double
        trigamma x = snd $ last $ take 7 $ iterate next (x' - 1, p')
          where
            x' = x + 6
            p  = 1 / x' ^ 2
            p' = p / 2 + c / x'
            c  = foldr1 (\a b -> (a + b * p)) [1, 1/6, -1/30, 1/42, -1/30, 5/66]
            next (x, p) = (x - 1, 1 / x ^ 2 + p)

    The problem is that when I run both through Criterion, my Haskell version is six or seven times slower (I'm compiling with -O2 on GHC 6.12.1). Some similar functions are even worse. I know practically nothing about Haskell performance, and I'm not terribly interested in digging through Core or anything like that, since I can always just call the handful of math-intensive C functions through the FFI. But I'm curious about whether there's low-hanging fruit that I'm missing: some kind of extension or library or annotation that I could use to speed up this numeric stuff without making it too ugly.

    Read the article

  • send message to a web service according to its schema

    - by hguser
    Hi:

    When I request a web service, it gives me a response which shows me the required parameters and their schema. [screenshot: the web service response describing the parameter]

    Then I start to organize the next request according to the parameter. For the parameter "bandWidth" I set it as the following:

        <InputParameter parameterID="bandWidth">
          <value>
            <commonData>
              <swe:Category>
                <swe:quality>
                  <swe:Text>
                    <swe:value>low</swe:value>
                  </swe:Text>
                </swe:quality>
              </swe:Category>
            </commonData>
          </value>
        </InputParameter>

    However, I got an exception. [screenshot: the error information]

    I also tried the following format, but things do not change:

        <InputParameter parameterID="bandWidth">
          <value>
            <commonData>
              <swe:Category>
                <swe:value>low</swe:value>
              </swe:Category>
            </commonData>
          </value>
        </InputParameter>

    So, I wonder how to define the parameter to match the format it defined? The schema can be found there: [link: The schema]

    Read the article

  • Java: Combine 2 List<String[]>

    - by battousai622
    I have two Lists of string arrays. I want to be able to create a new List (newList) by combining the 2 lists. But it must meet these 3 conditions:

    1) Copy the contents of store_inventory into newList.
    2) Then, if the item names in store_inventory & new_acquisitions match, just add the two quantities together and change it in newList.
    3) If new_acquisitions has a new item that does not exist in store_inventory, then add it to the newList.

    The titles for the CSV list are: Item Name, Quantity, Cost, Price. The List contains a String[] of item name, quantity, cost and price for each row.

        CSVReader from = new CSVReader(new FileReader("/test/new_acquisitions.csv"));
        List<String[]> acquisitions = from.readAll();

        CSVReader to = new CSVReader(new FileReader("/test/store_inventory.csv"));
        List<String[]> inventory = to.readAll();

        List<String[]> newList;

    Any code to get me started would be great! =]

    This is what I have so far:

        for (int i = 0; i < acquisitions.size(); i++) {
            temp1 = acquisitions.get(i);
            for (int j = 1; j < inventory.size(); j++) {
                temp2 = inventory.get(j);
                if (temp1[0].equals(temp2[0])) {
                    //if match found... do something?
                    //break out of loop
                }
            }
            //if new item found... do something?
        }
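    A minimal sketch of one way to get started, using a map keyed on item name instead of nested loops. It assumes column 0 is the item name and column 1 is the quantity, and that any header row has already been removed from both lists; the class and method names are illustrative, not from the original post:

        import java.util.ArrayList;
        import java.util.LinkedHashMap;
        import java.util.List;
        import java.util.Map;

        public final class InventoryMerger {
            // Merge store_inventory and new_acquisitions into a new list:
            // inventory rows are copied first (condition 1), matching item names
            // get their quantities summed (condition 2), and acquisitions with
            // unknown names are appended (condition 3).
            public static List<String[]> merge(List<String[]> inventory,
                                               List<String[]> acquisitions) {
                Map<String, String[]> byName = new LinkedHashMap<String, String[]>();
                for (String[] row : inventory) {
                    byName.put(row[0], row.clone()); // copy, don't mutate the input
                }
                for (String[] row : acquisitions) {
                    String[] existing = byName.get(row[0]);
                    if (existing == null) {
                        byName.put(row[0], row.clone()); // brand-new item
                    } else {
                        int qty = Integer.parseInt(existing[1]) + Integer.parseInt(row[1]);
                        existing[1] = Integer.toString(qty); // sum the quantities
                    }
                }
                return new ArrayList<String[]>(byName.values());
            }
        }

    Called as InventoryMerger.merge(inventory, acquisitions), this preserves the inventory order and appends genuinely new items at the end, and it avoids the O(n*m) nested scan of the two lists.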

    Read the article

  • PostgreSQL - Error: SQL state: XX000.

    - by rob
    I have a table in Postgres that looks like this: CREATE TABLE "Population" ( "Id" bigint NOT NULL DEFAULT nextval('"population_Id_seq"'::regclass), "Name" character varying(255) NOT NULL, "Description" character varying(1024), "IsVisible" boolean NOT NULL CONSTRAINT "pk_Population" PRIMARY KEY ("Id") ) WITH ( OIDS=FALSE ); And a select function that looks like this: CREATE OR REPLACE FUNCTION "Population_SelectAll"() RETURNS SETOF "Population" AS $BODY$select "Id", "Name", "Description", "IsVisible" from "Population"; $BODY$ LANGUAGE 'sql' STABLE COST 100 Calling the select function returns all the rows in the table as expected. I have a need to add a couple of columns to the table (both of which are foreign keys to other tables in the database). This gives me a new table def as follows: CREATE TABLE "Population" ( "Id" bigint NOT NULL DEFAULT nextval('"population_Id_seq"'::regclass), "Name" character varying(255) NOT NULL, "Description" character varying(1024), "IsVisible" boolean NOT NULL, "DefaultSpeciesId" bigint NOT NULL, "DefaultEcotypeId" bigint NOT NULL, CONSTRAINT "pk_Population" PRIMARY KEY ("Id"), CONSTRAINT "fk_Population_DefaultEcotypeId" FOREIGN KEY ("DefaultEcotypeId") REFERENCES "Ecotype" ("Id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION, CONSTRAINT "fk_Population_DefaultSpeciesId" FOREIGN KEY ("DefaultSpeciesId") REFERENCES "Species" ("Id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION ) WITH ( OIDS=FALSE ); and function: CREATE OR REPLACE FUNCTION "Population_SelectAll"() RETURNS SETOF "Population" AS $BODY$select "Id", "Name", "Description", "IsVisible", "DefaultSpeciesId", "DefaultEcotypeId" from "Population"; $BODY$ LANGUAGE 'sql' STABLE COST 100 ROWS 1000; Calling the function after these changes results in the following error message: ERROR: could not find attribute 11 in subquery targetlist SQL state: XX000 What is causing this error and how do I fix it? I have tried to drop and recreate the columns and function - but the same error occurs. Platform is PostgreSQL 8.4 running on Windows Server. Thanks.

    Read the article

  • How to Avoid PHP Object Nesting/Creation Limit?

    - by Will Shaver
    I've got a handmade ORM in PHP that seems to be bumping up against an object limit and causing PHP to crash. Here's a simple script that will cause crashes:

        <?
        class Bob
        {
            protected $parent;

            public function Bob($parent) {
                $this->parent = $parent;
            }

            public function __toString() {
                if($this->parent)
                    return (string) "x " . $this->parent;
                return "top";
            }
        }

        $bobs = array();
        for($i = 1; $i < 40000; $i++) {
            $bobs[] = new Bob($bobs[$i - 1]);
        }
        ?>

    Even running this from the command line will cause issues. Some boxes take more than 40,000 objects. I've tried it on Linux/Apache (fail), but my app runs on IIS/FastCGI. On FastCGI this causes the famous "The FastCGI process exited unexpectedly" error. Obviously 20k objects is a bit high, but it crashes with far fewer objects if they have data and nested complexity.

    FastCGI isn't the issue; I've tried running it from the command line. I've tried setting the memory to something really high (6,000MB) and to something really low (24MB). If I set it low enough I'll get the "allocated memory size xxx bytes exhausted" error. I'm thinking that it has to do with the number of functions that are called, some kind of nesting prevention. I didn't think that my ORM's nesting was that complicated, but perhaps it is. I've got some pretty clear cases where if I load just ONE more object it dies, but it loads in under 3 seconds if it works.

    Read the article

  • How can I work around WinXP using ports 1025-5000 as ephemeral?

    - by Chris Dolan
    If you create a TCP client socket with port 0 instead of a non-zero port, then the operating system chooses any free ephemeral port for you. Most OSes choose ephemeral ports from the IANA dynamic port range of 49152-65535. However, in Windows Server 2003 and earlier (including XP), Microsoft used ports 1025-5000 as the ephemeral range, according to their bind() documentation.

    I run multiple Java services on the same hardware. On rare occasions, this range collides with well-known ports that I use for other services (e.g. port 4160 for Jini discovery). While rare, this has caused real problems. Is there any easy way to tell Windows or Java to use a different port range for client sockets? Microsoft's docs indicate that I can change the high end of that range via the MaxUserPort TCP/IP registry setting, but I see no way to change the low end.

    Update: I've made some progress on this. It looks like Microsoft has a concept of reserved ports that are exceptions to the ephemeral port range. There's a registry setting that lets you change this permanently, and apparently there must be an API to do the same thing, because there's a data structure that holds high/low values for reserved port ranges, but I can't find the actual function call anywhere... The registry solution may work, but now I'm fixated on this API.
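    One workaround sketch on the Java side, assuming you can dedicate a local port range of your own (the 50000-50999 range and the class name below are illustrative, not from the question): bind each client socket to an explicit local port before connecting, so the OS never draws from 1025-5000 for that socket.

        import java.io.IOException;
        import java.net.InetSocketAddress;
        import java.net.Socket;

        public final class RangedClientSocket {
            // Hypothetical self-managed local port range, chosen to avoid 1025-5000.
            private static final int LOW = 50000, HIGH = 50999;

            public static Socket connect(String host, int remotePort) throws IOException {
                for (int local = LOW; local <= HIGH; local++) {
                    Socket s = new Socket();
                    try {
                        s.bind(new InetSocketAddress(local)); // claim an explicit local port
                        s.connect(new InetSocketAddress(host, remotePort));
                        return s;
                    } catch (IOException e) {
                        s.close(); // local port in use (or connect failed); try the next
                    }
                }
                throw new IOException("No free local port in " + LOW + "-" + HIGH);
            }
        }

    This trades the OS's ephemeral allocation for your own bookkeeping, so it only helps for sockets your own code opens; it does nothing for libraries that create their own client sockets.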

    Read the article

  • Concurrent WCF calls via shared channel

    - by Kent Boogaart
    I have a web tier that forwards calls onto an application tier. The web tier uses a shared, cached channel to do so. The application tier services in question are stateless and have concurrency enabled. But they are not being called concurrently.

    If I alter the web tier to create a new channel on every call, then I do get concurrent calls onto the application tier. But I wanted to avoid that cost, since it is functionally unnecessary for my scenario: I have no session state, and nor do I need to re-authenticate the caller each time. I understand that the creation of the channel factory is far more expensive than the creation of the channels, but it is still a cost I'd like to avoid if possible.

    I found this article on MSDN that states: "While channels and clients created by the channels are thread-safe, they might not support writing more than one message to the wire concurrently. If you are sending large messages, particularly if streaming, the send operation might block waiting for another send to complete."

    Firstly, I'm not sending large messages (just a lot of small ones, since I'm doing load testing), but I am still seeing the blocking behavior. Secondly, this is rather open-ended and unhelpful documentation: it says they "might not" support writing more than one message, but doesn't explain the scenarios under which they would support concurrent messages.

    Can anyone shed some light on this?

    Read the article

  • Question about WeakHashMap

    - by michael
    Hi,

    In the Javadoc of WeakHashMap (http://java.sun.com/j2se/1.4.2/docs/api/java/util/WeakHashMap.html), it says: "Each key object in a WeakHashMap is stored indirectly as the referent of a weak reference. Therefore a key will automatically be removed only after the weak references to it, both inside and outside of the map, have been cleared by the garbage collector." And then: "Note that a value object may refer indirectly to its key via the WeakHashMap itself; that is, a value object may strongly refer to some other key object whose associated value object, in turn, strongly refers to the key of the first value object."

    But shouldn't both the key and the value be held by weak references in WeakHashMap? That is, if memory is low, the GC would free the memory held by the value object (since the value object most likely takes up more memory than the key object in most cases), and if the GC frees the value object, the key object could be freed as well?

    Basically, I am looking for a HashMap which will reduce memory usage when memory is low (the GC collects the value and key objects if necessary). Is that possible in Java? Thank you.
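    A minimal sketch of one common approach to a memory-sensitive map (not part of the Javadoc quoted above): keep ordinary keys, but hold each value through a SoftReference, which the garbage collector is allowed to clear when memory runs low. The class name is illustrative:

        import java.lang.ref.SoftReference;
        import java.util.HashMap;
        import java.util.Map;

        // Values are softly reachable, so the GC may reclaim them under memory
        // pressure; get() then reports a miss and drops the stale entry.
        public final class MemorySensitiveCache<K, V> {
            private final Map<K, SoftReference<V>> map = new HashMap<K, SoftReference<V>>();

            public void put(K key, V value) {
                map.put(key, new SoftReference<V>(value));
            }

            public V get(K key) {
                SoftReference<V> ref = map.get(key);
                if (ref == null) {
                    return null;        // never stored
                }
                V value = ref.get();
                if (value == null) {
                    map.remove(key);    // value was reclaimed; remove the dead entry
                }
                return value;
            }
        }

    Note this inverts WeakHashMap's behavior: here the values (not the keys) are collectible, and the key entry is only cleaned up lazily on the next lookup.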

    Read the article

  • DSP - Filter sweep effect

    - by Trap
    I'm implementing a 'filter sweep' effect (I don't know if that's what it's called). What I do is basically create a low-pass filter and make it 'move' along a certain frequency range. To calculate the filter cut-off frequency at a given moment, I use a user-provided linear function, which yields values between 0 and 1.

    My first attempt was to directly map the values returned by the linear function to the range of frequencies, as in cf = freqRange * lf(x). Although it worked OK, it looked as if the sweep ran much faster when moving through the low frequencies and then slowed down on its way to the high frequency zone. I'm not sure why this is, but I guess it's something to do with human hearing perceiving changes in frequency in a non-linear manner.

    My next attempt was to move the filter's cut-off frequency in a logarithmic way. It works much better now, but I still feel that the filter doesn't move at a constant perceived speed through the range of frequencies. How should I divide the frequency space to obtain a constant perceived sweep speed?

    Thanks in advance.
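    A constant perceived sweep speed is usually obtained by interpolating exponentially between the endpoint frequencies, i.e. moving linearly in log-frequency so each unit of time covers the same frequency ratio (the same musical interval). A minimal sketch, with 20 Hz and 20 kHz as assumed, illustrative endpoints:

        // Map a control value t in [0, 1] to a cut-off frequency such that equal
        // steps in t cover equal frequency *ratios*, not equal Hz differences.
        public final class FilterSweep {
            private static final double F_LOW = 20.0;     // assumed sweep start (Hz)
            private static final double F_HIGH = 20000.0; // assumed sweep end (Hz)

            public static double cutoff(double t) {
                // f(t) = fLow * (fHigh / fLow)^t  -- exponential interpolation
                return F_LOW * Math.pow(F_HIGH / F_LOW, t);
            }

            public static void main(String[] args) {
                for (double t = 0.0; t <= 1.0; t += 0.25) {
                    System.out.printf("t=%.2f -> %.1f Hz%n", t, cutoff(t));
                }
            }
        }

    With these endpoints, each 0.25 step in t multiplies the cut-off by the same factor (1000^0.25, about 5.6x), which tracks pitch perception far better than equal Hz steps.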

    Read the article

  • Ruby - calling constructor without arguments & removal of new line characters

    - by Raj
    I am a newbie at Ruby. I have written a sample program, and I don't understand the following:

    1. Why are constructors without any arguments not called in Ruby?
    2. How do we access the class variable outside the class' definition?
    3. Why does it always append newline characters at the end of the string, and how do we strip them?

    Code:

        class Employee
          attr_reader :empid
          attr_writer :empid
          attr_writer :name

          def name
            return @name.upcase
          end

          attr_accessor :salary

          @@employeeCount = 0

          def initiaze()
            @@employeeCount += 1
            puts ("Initialize called!")
          end

          def getCount
            return @@employeeCount
          end
        end

        anEmp = Employee.new
        print ("Enter new employee name: ")
        anEmp.name = gets()
        print ("Enter #{anEmp.name}'s employee ID: ")
        anEmp.empid = gets()
        print ("Enter salary for #{anEmp.name}: ")
        anEmp.salary = gets()

        theEmpName = anEmp.name.split.join("\n")
        theEmpID = anEmp.empid.split.join("\n")
        theEmpSalary = anEmp.salary.split.join("\n")

        anEmp = Employee.new()
        anEmp = Employee.new()
        theCount = anEmp.getCount

        puts ("New employee #{theEmpName} with employee ID #{theEmpID} has been enrolled, welcome to hell! You have been paid as low as $ #{theEmpSalary}")
        puts ("Total number of employees created = #{theCount}")

    Output:

        Enter new employee name: Lionel Messi
        Enter LIONEL MESSI 's employee ID: 10
        Enter salary for LIONEL MESSI : 10000000
        New employee LIONEL MESSI with employee ID 10 has been enrolled, welcome to hell! You have been paid as low as $ 10000000
        Total number of employees created = 0

    Thanks

    Read the article

  • TFS query mixing Tasks and Bugs, sorted by Priority

    - by Val
    We're using TFS with MSF for Agile 4.2 on a project, and I have a bunch of work to do, both Tasks and Bugs. Both are prioritized by our managers and assigned due dates and target releases. I use a Work Item query as my main TODO list, and I want to list all the Work Items assigned to me, in order by due date and priority.

    Problem: I can't seem to find a way to write a unified query that will list both Tasks and Bugs sorted by date and then priority. The problem is that Tasks and Bugs use different fields for priority. So, my query currently lists the Tasks by Due Date, then by Task Priority, then it lists Bugs by Due Date, then by Priority. As a result, I see Tasks that are due later than Bugs:

        Title   Due Date    Priority    Task Priority
        task1   4/23/2010               Medium
        task2   4/23/2010               High
        task3   4/30/2010               Low
        task4   4/30/2010               Medium
        bug1    4/23/2010   1
        bug2    4/23/2010   2

    What I want:

        Title   Due Date    Priority    Task Priority
        task1   4/23/2010               Medium
        task2   4/23/2010               High
        bug1    4/23/2010   1
        bug2    4/23/2010   2
        task3   4/30/2010               Low
        task4   4/30/2010               Medium

    I don't care if the bugs come before or after the tasks on the same due date; I just want all the work items grouped together by due date, so I never see Tasks for a later due date before Bugs for an earlier one. Another problem is the alpha sort on Task Priority, which means I can't get them to sort by the meaning of the priority. But that's a minor problem I can live with if I can get the Tasks and Bugs intermingled. Any way to do this in a single query?

    Read the article
