Search Results

Search found 3403 results on 137 pages for 'monthly statements'.

Page 39/137 | < Previous Page | 35 36 37 38 39 40 41 42 43 44 45 46  | Next Page >

  • Index Tuning for SSIS tasks

    - by Raj More
    I am loading tables in my warehouse using SSIS. Since my SSIS packages are slow, it seemed like a great idea to build indexes on the tables. There are no primary keys (and therefore no foreign keys), indexes (clustered or otherwise), or constraints on this warehouse. In other words, it is 100% efficiency free. We are going to add indexes based on usage - by analyzing new queries and current query performance. So, instead of doing it the old-fashioned sweat-and-grunt way of actually reading the SQL statements and execution plans, I thought I'd put the shiny new Database Engine Tuning Advisor to use.

    I turned SQL logging off in my SSIS package, ran a "Tuning" trace, saved it to a table, and analyzed the output in the Tuning Advisor. Most of the lookups are done as:

        exec sp_executesql N'SELECT [Active], [CompanyID], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID] FROM [dbo].[Company] WHERE ([CompanyID]=@P1) AND ([StartDateTime] IS NOT NULL AND [EndDateTime] IS NULL)',N'@P1 int',1

    (the same statement repeats with @P1 = 2, 3, 4) and when analyzed, these statements get the reason "Event does not reference any tables". Huh? Does it not see the FROM dbo.Company??!! What is going on here?

    So, I have multiple questions: How do I get the trace to capture the actual statement being executed, not just what was submitted in a batch? And are there any best practices to follow for tuning performance related to SSIS packages running against SQL Server 2008?
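    For illustration only (not from the original post), a covering index for the repeated Company lookup above might look like the following sketch; the key order and included columns are assumptions based on the traced WHERE clause and select list:

        -- Hypothetical covering index for the repeated Company lookup
        -- (column names taken from the traced query; key order is an assumption).
        CREATE NONCLUSTERED INDEX IX_Company_CompanyID_Dates
            ON [dbo].[Company] ([CompanyID], [StartDateTime], [EndDateTime])
            INCLUDE ([Active], [CompanyName], [CompanyShortName], [CompanyTypeID], [HierarchyNodeID]);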

    Read the article

  • Automated Oracle Schema Migration Tool

    - by Dave Jarvis
    What are some tools (commercial or OSS) that provide a GUI-based mechanism for creating schema upgrade scripts? To be clear, here are the tool responsibilities:

        - Obtain connection to recent schema version (called "source").
        - Obtain connection to previous schema version (called "target").
        - Compare all schema objects between source and target.
        - Create a script to make the target schema equivalent to the source schema ("upgrade script").
        - Create a rollback script to revert the source schema, used if the upgrade script fails (at any point).
        - Create individual files for schema objects.

    The software must:

        - Use ALTER TABLE instead of DROP and CREATE for renamed columns.
        - Work with Oracle 10g or greater.
        - Create scripts that can be batch executed (via command-line).
        - Have a trivial installation process.
        - (Bonus) Create scripts that can be executed with SQL*Plus.

    Here are some examples (from StackOverflow, ServerFault, and Google searches):

        - Change Manager
        - Oracle SQL Developer

    Software that does not meet the criteria, or cannot be evaluated, includes:

        - TOAD
        - PL/SQL Developer - invalid SQL*Plus statements; does not produce ALTER statements.
        - SQL Fairy - no installer; complex installation process; poorly documented.
        - DBDiff - crippled data set evaluation; poor customer support.
        - OrbitDB - crippled data set evaluation.
        - SchemaCrawler - no easily identifiable download version for Oracle databases.
        - SQL Compare - SQL Server, not Oracle.
        - LiquiBase - requires changing the development process; no installer; config files must be edited manually; does not recognize its own baseUrl parameter.

    The only acceptable crippling of the evaluation version is by time. Crippling by restricting the number of tables and views hides possible bugs that are only visible in the software during the attempt to migrate hundreds of tables and views.
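    As a point of reference for the ALTER TABLE requirement above (not from the original question; the table and column names are made up), a rename-preserving change in Oracle looks like:

        -- Hypothetical example: renaming a column in place instead of
        -- dropping and re-creating it (Oracle 9i Release 2 and later syntax).
        ALTER TABLE customer RENAME COLUMN cust_nm TO customer_name;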

    Read the article

  • Use an Array to Cycle Through a MySQL Where Statement

    - by Ryan
    I'm trying to create a loop that will output multiple where statements for a MySQL query. Ultimately, my goal is to end up with these four separate Where statements:

        `fruit` = '1' AND `vegetables` = '1'
        `fruit` = '1' AND `vegetables` = '2'
        `fruit` = '2' AND `vegetables` = '1'
        `fruit` = '2' AND `vegetables` = '2'

    My theoretical code is pasted below:

        <?php
        $columnnames = array('fruit','vegetables');
        $column1 = array('1','2');
        $column2 = array('1','2');
        $where = '';
        $column1inc = 0;
        $column2inc = 0;
        while( $column1inc <= count($column1) ) {
            if( !empty( $where ) )
                $where .= ' AND ';
            $where = "`".$columnnames[0]."` = ";
            $where .= "'".$column1[$column1inc]."'";
            while( $column2inc <= count($column2) ) {
                if( !empty( $where ) )
                    $where .= ' AND ';
                $where .= "`".$columnnames[1]."` = ";
                $where .= "'".$column2[$column2inc]."'";
                echo $where."\n";
                $column2inc++;
            }
            $column1inc++;
        }
        ?>

    When I run this code, I get the following output:

        `fruit` = '1' AND `vegetables` = '1'
        `fruit` = '1' AND `vegetables` = '1' AND `vegetables` = '2'
        `fruit` = '1' AND `vegetables` = '1' AND `vegetables` = '2' AND `vegetables` = ''

    Does anyone see what I am doing incorrectly? Thanks.
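    For comparison (this is not part of the original question), a sketch that produces the four desired clauses by starting a fresh string for each combination rather than appending to the previous one; variable names reuse the ones above:

        <?php
        // Sketch: build every fruit/vegetables combination independently.
        $columnnames = array('fruit', 'vegetables');
        $column1 = array('1', '2');
        $column2 = array('1', '2');

        foreach ($column1 as $fruitValue) {
            foreach ($column2 as $vegValue) {
                // Start a fresh clause for each combination instead of
                // appending to the one built on the previous pass.
                $where = "`" . $columnnames[0] . "` = '" . $fruitValue . "'"
                       . " AND `" . $columnnames[1] . "` = '" . $vegValue . "'";
                echo $where . "\n";
            }
        }
        ?>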

    Read the article

  • Please help optimizing a long running query (left outer join, with 2 subqueries)

    - by 46and2
    Hi all. The query I need help with is:

        SELECT d.bn, d.4700, d.4500, ... , p.`Activity Description`
        FROM (
            SELECT temp.bn, temp.4700, temp.4500, ....
            FROM `tdata` temp
            GROUP BY temp.bn
            HAVING (COUNT(temp.bn) = 1)
        ) d
        LEFT OUTER JOIN (
            SELECT temp2.bn, max(temp2.FPE) AS max_fpe, temp2.`Activity Description`
            FROM `pdata` temp2
            GROUP BY temp2.bn
        ) p ON p.bn = d.bn;

    The ... represents other fields that aren't really important to solving this problem. The issue is with the second subquery - it is not using the index I have created and I am not sure why; it seems to be because of the way TEXT fields are handled. The first subquery uses the index I have created and runs quite snappily, however an EXPLAIN on the second shows 'Using temporary; Using filesort'. Please see the indexes I have created in the table create statements below. Can anyone help me optimize this?

    By way of quick explanation, the first subquery is meant to select only records that have unique bn's; the second, while it looks a bit wacky (with the MAX function there which is not being used in the result set), is making sure that only one record from the right part of the join is included in the result set.

    My table create statements are:

        CREATE TABLE `tdata` (
          `BN` varchar(15) DEFAULT NULL,
          `4000` varchar(3) DEFAULT NULL,
          `5800` varchar(3) DEFAULT NULL,
          ....
          KEY `BN` (`BN`),
          KEY `idx_t3010` (`BN`,`4700`,`4500`,`4510`,`4520`,`4530`,`4570`,`4950`,`5000`,`5010`,`5020`,`5050`,`5060`,`5070`,`5100`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8

        CREATE TABLE `pdata` (
          `BN` varchar(15) DEFAULT NULL,
          `FPE` datetime DEFAULT NULL,
          `Activity Description` text,
          ....
          KEY `BN` (`BN`),
          KEY `idx_programs_2009` (`BN`,`FPE`,`Activity Description`(100))
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8

    Thanks!

    Read the article

  • Cannot get new product attribute in grid display

    - by russjman
    I added a new attribute to my products (a boolean "yes/no" field). It is a variable to enable/disable the price from displaying on the product detail page and in grid view. I managed to get it to work on the product info page, but on the product grid page I can't seem to access that variable. Specifically, the template I am working with is catalog/product/price.phtml. From what I can tell, the price is being displayed by the same group of if statements on both the product detail page and the grid page. This has me confused, because I can't find any code in that template to handle multiple products, just a bunch of nested if statements.

    This is how I'm attempting to access the new variable using $_displayPrice, on line 36 of catalog/product/price.phtml:

        <?php $_product = $this->getProduct(); ?>
        <?php $_id = $_product->getId() ?>
        <?php $_displayPrice = $_product->getDisplayPrice() ? "Yes" : "No"; echo $_displayPrice; ?>

    What has me further confused is that when I display $_product->getData(), my new variable isn't anywhere among that data. Thanks in advance
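    Not part of the original question, but a common reason for this in Magento 1 is that listing pages load products through a collection that only selects attributes flagged for listings. A hedged sketch of the usual workarounds follows; the attribute code display_price is an assumption:

        <?php
        // Sketch only. Option 1: flag the attribute so catalog listings load it
        // (usually done in the attribute's setup/installer script):
        //     'used_in_product_listing' => 1
        //
        // Option 2: explicitly add the attribute to the product collection that
        // feeds the grid (e.g. in a custom block), assuming the attribute code
        // is 'display_price':
        $_productCollection = Mage::getModel('catalog/product')->getCollection()
            ->addAttributeToSelect('display_price');
        ?>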

    Read the article

  • What do these C# auto-generated table adapter commands do? e.g. UPDATE/INSERT followed by a SELECT

    - by RickL
    I'm working with a legacy application which I'm trying to change so that it can work with SQL CE, whilst it was originally written against SQL Server. The problem I am getting now is that when I try to do dataAdapter.Update, SQL CE complains that it is not expecting the SELECT keyword in the command text. I believe this is because SQL CE does not support batch SELECT statements. The auto-generated table adapter command looks like this:

        this._adapter.InsertCommand.CommandText =
            @"INSERT INTO [Table] ([Field1], [Field2]) VALUES (@Value1, @Value2);
        SELECT Field1, Field2 FROM Table WHERE (Field1 = @Value1)";

    What is it doing? It looks like it is inserting new records from the datatable into the database, and then reading that record back from the database into the datatable. What's the point of that? Can I just go through the code and remove all these SELECT statements? Or is there an easier way to solve my problem of wanting to use these data adapters with SQL CE? I cannot regenerate these table adapters, as the people who knew how have long since left.
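    For illustration (not from the original post): the trailing SELECT exists so the DataSet row is refreshed with database-generated values after the insert. A hedged sketch of one workaround for SQL CE - trimming the batch to the INSERT only and accepting that the row is not refreshed - might look like:

        // Sketch only: strip the refresh SELECT from the generated command so
        // SQL CE receives a single-statement batch. Table, column and parameter
        // names are taken from the snippet above.
        this._adapter.InsertCommand.CommandText =
            @"INSERT INTO [Table] ([Field1], [Field2]) VALUES (@Value1, @Value2)";

        // If the application relies on values the database fills in (identity
        // columns, defaults), they would have to be fetched separately, e.g.
        // with an extra command issued after adapter.Update(...).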

    Read the article

  • How can I provide users with the functionality of the DBUnit DatabaseOperation methods from a web interface?

    - by reckoner
    I am currently updating a Java-based web application which allows database developers to create stored procedure regression test suites for database testing. Currently, for the test setup, execution and clean-up stages, the user is provided with text boxes where they are able to enter SQL code which is executed by the isql command. I would like to extend the application to use DbUnit's DatabaseOperation methods to provide more ways to set up the state of the database than just SQL statements. The main reason for using DbUnit rather than just SQL statements is to be able to create and store XML and XLS DataSets on a server, where they can be associated with their test cases and used for data setup.

    My question is: How can I provide users with the functionality of the DbUnit DatabaseOperation methods from a web interface?

    I have considered:

        - Creating a simple programming language and a parser to read some simple syntax involving the DbUnit method names, which accept as a parameter the file location of an XML or XLS DataSet. I was thinking of allowing the user to register the files they need with the web app, which would catalogue them and provide each file with an identifier which could be passed as a parameter to the methods in this simple programming language.
        - Creating an XML DTD which provides the user with the ability to specify operations and parameters. If I went this approach, how can I execute the methods and their parameters that I parse from the XML document?
        - Creating a table in the database which stores the method and an FK relation to a catalogued DataSet file; however, I don't think this would be a good solution due to the fact that data entry would be tedious.

    Thanks for your help.
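    As a point of reference (this sketch is not from the original question), the server-side call that any of the options above would ultimately have to make looks roughly like this; the JDBC connection and file name are placeholders:

        // Sketch: executing a DbUnit operation against a catalogued DataSet file.
        import java.io.File;
        import java.sql.Connection;
        import org.dbunit.database.DatabaseConnection;
        import org.dbunit.database.IDatabaseConnection;
        import org.dbunit.dataset.IDataSet;
        import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
        import org.dbunit.operation.DatabaseOperation;

        public class DataSetLoader {
            public void cleanInsert(Connection jdbcConnection, File dataSetFile) throws Exception {
                IDatabaseConnection connection = new DatabaseConnection(jdbcConnection);
                IDataSet dataSet = new FlatXmlDataSetBuilder().build(dataSetFile);
                // CLEAN_INSERT deletes existing rows for the tables listed in the
                // DataSet and then re-inserts the rows the DataSet defines.
                DatabaseOperation.CLEAN_INSERT.execute(connection, dataSet);
            }
        }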

    Read the article

  • Should java try blocks be scoped as tightly as possible?

    - by isme
    I've been told that there is some overhead in using the Java try-catch mechanism. So, while it is necessary to put methods that throw checked exceptions within a try block to handle the possible exception, it is good practice performance-wise to limit the size of the try block to contain only those operations that could throw exceptions.

    I'm not so sure that this is a sensible conclusion. Consider the two implementations below of a function that processes a specified text file. Even if it is true that the first one incurs some unnecessary overhead, I find it much easier to follow. It is less clear where exactly the exceptions come from just from looking at the statements, but the comments clearly show which statements are responsible. The second one is much longer and more complicated than the first. In particular, the nice line-reading idiom of the first has to be mangled to fit the readLine call into a try block.

    What is the best practice for handling exceptions in a function where multiple exceptions could be thrown in its definition?

    This one contains all the processing code within the try block:

        void processFile(File f) {
            try {
                // construction of FileReader can throw FileNotFoundException
                BufferedReader in = new BufferedReader(new FileReader(f));
                // call of readLine can throw IOException
                String line;
                while ((line = in.readLine()) != null) {
                    process(line);
                }
            } catch (FileNotFoundException ex) {
                handle(ex);
            } catch (IOException ex) {
                handle(ex);
            }
        }

    This one contains only the methods that throw exceptions within try blocks:

        void processFile(File f) {
            FileReader reader;
            try {
                reader = new FileReader(f);
            } catch (FileNotFoundException ex) {
                handle(ex);
                return;
            }
            BufferedReader in = new BufferedReader(reader);
            String line;
            while (true) {
                try {
                    line = in.readLine();
                } catch (IOException ex) {
                    handle(ex);
                    break;
                }
                if (line == null) {
                    break;
                }
                process(line);
            }
        }
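    Not part of the original question, but for readers on Java 7 or later (an assumption), the try-with-resources form keeps the reader's lifetime tight without the manual bookkeeping of the second version; process and handle are the same placeholder methods used above:

        void processFile(File f) {
            // try-with-resources closes the reader automatically, and the catch
            // blocks still sit close to the statements that can throw.
            try (BufferedReader in = new BufferedReader(new FileReader(f))) {
                String line;
                while ((line = in.readLine()) != null) {
                    process(line);
                }
            } catch (FileNotFoundException ex) {
                handle(ex);
            } catch (IOException ex) {
                handle(ex);
            }
        }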

    Read the article

  • Logback: What would cause DEBUG - WARN to append to file but NOT ERROR?

    - by Zombies
    I am running a Java program from Eclipse, and I am using the logback console plugin. I can see the ERROR level statements being appended to the console (as well as all of the others). But for some reason my file, which is correctly receiving new DEBUG-WARN statements, is NOT receiving the ERROR level ones. Here is my logback.xml:

        <consolePlugin />
        <appender name="FILE" class="ch.qos.logback.core.FileAppender">
            <file>logs/main.log</file>
            <layout class="ch.qos.logback.classic.PatternLayout">
                <Pattern>%date %level [%thread] %logger{10} [%file:%line] %msg%n</Pattern>
            </layout>
        </appender>
        <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
            <layout class="ch.qos.logback.classic.PatternLayout">
                <Pattern>%msg%n</Pattern>
            </layout>
        </appender>
        <logger name="WebsiteChecker">
            <appender-ref ref="FILE" />
        </logger>
        <root>
            <level value="debug" />
            <!--<appender-ref ref="STDOUT" />-->
            <appender-ref ref="FILE" />
        </root>
        </configuration>

    Read the article

  • Files under Program Files have a split personality

    - by regularfry
    I have a Ruby application I'm installing (along with a packaged Ruby interpreter) under Program Files on Windows 7 with an NSIS-built installer. In order to debug it, I edited one of the files to add some debugging statements. After that, I uninstalled the package and ran a new version of the installer which includes a new copy of the edited file, without debugging statements. Now, I can't get the new copy to load into Ruby.

        - If I run type <filename> in cmd.exe, or open the file in Notepad.exe or Firefox, I see the new version.
        - If I run ruby -e "puts File.read('<filename>')", or open the file in emacs, I see the old version.
        - If, in Windows Explorer, I copy the file to a new filename, everything can see the new contents at that filename.
        - If I delete the original file and rename the copy to replace the original, the split personality returns.

    This situation survives a reboot, so it's not a simple matter of a file being accidentally held open. What on earth is going on here? Is there some aspect of the install process that might be checkpointing the file in a way I can revert, or at least switch off while I'm debugging the installer?

    Read the article

  • Checking multiple conditions in Ruby (within Rails, which may not matter)

    - by Ev
    Hello rubyists and railers, I have a method which checks over a params hash to make sure that it contains certain keys, and to make sure that certain values are set within a certain range. This is for an action that responds to a POST query from an iPhone app. Anyway, this method is checking for about 10 different conditions - any of which will result in an HTTP error being returned (I'm still considering this, but possibly a 400: Bad Request error). My current syntax is basically this (paraphrased):

        def invalid_submission_params?(params)
          [check one] or
          [check two] or
          [check three] or
          [check four]
          # etc etc
        end

    Where each of the check statements returns true if that particular parameter check results in an invalid parameter set. I call it as a before filter with params[:submission] as the argument. This seems a little ugly (all the strung-together or statements). Is there a better way? I have tried using case but can't see a way to make it more elegant. Or, perhaps, is there a Rails method that lets me check the incoming params hash for certain conditions before handing control off to my action method?
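    For illustration only (not from the original question), one common pattern is to collect the checks as an array of lambdas and ask whether any of them fail; the individual check bodies below are placeholders:

        # Sketch: each entry is a named predicate over the params hash.
        INVALID_SUBMISSION_CHECKS = [
          lambda { |p| p[:amount].nil? },                        # required key missing
          lambda { |p| p[:amount].to_i <= 0 },                   # value out of range
          lambda { |p| !%w[gbp usd eur].include?(p[:currency]) } # unsupported value
        ]

        def invalid_submission_params?(params)
          INVALID_SUBMISSION_CHECKS.any? { |check| check.call(params) }
        end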

    Read the article

  • SQL Server becomes slow after restart

    - by Tobi DM
    We use SQL Server 2005 on Windows Server 2008. The server has 48 GB RAM; SQL Server is configured to use 40 GB of it. There is only one database hosted (about 70 GB). The only app beside SQL Server is our app server, which connects the clients to the database.

    Now we encounter the following problem: after a restart of the server, the performance is great. The server grabs the 40 GB RAM which it is allowed to and then runs fast as hell. But after about 4 weeks the system becomes slower and slower. The execution time of statements (seen in the profiler) is rising slowly. But I cannot see that there is something going wrong on the server:

        - CPU usage is at about 20%
        - I/O also seems to be no problem
        - The process monitor does not show that there are strange apps or something like that
        - The event log has no interesting messages
        - No open transactions or blockings to see

    We already tried the following things without effect:

        - Dropped the caches by using the statements DBCC FREEPROCCACHE, DBCC FREESYSTEMCACHE('ALL') and DBCC DROPCLEANBUFFERS
        - Restarted the app server we are using
        - Restarted the SQL Server service

    But nothing helped except restarting the whole server. Any ideas?

    Read the article

  • Mapping Vectors

    - by Dan Snyder
    Is there a good way to map vectors? Here's an example of what I mean:

        vec0 = [0,0,0,0,0,0,0,0,0,0,0]
        vec1 = [1,4,2,7,3,2]
        vec2 = [0,0,0,0,0,0,0,0,0]
        vec3 = [7,2,7,9,9,6,1,0,4]
        vec4 = [0,0,0,0,0,0]
        mainvec = [0,0,0,0,0,0,0,0,0,0,0,1,4,2,7,3,2,0,0,0,0,0,0,0,0,0,7,2,7,9,9,6,1,0,4,0,0,0,0,0,0]

    Let's say mainvec doesn't exist (I'm just showing it so you can see the general data structure I have in mind). Now say I want mainvec(12), which would be 4. Is there a good way to map the call to these vectors without just stitching them together into a mainvec? I realize I could make a bunch of if statements that test the index of mainvec, and I can then offset each call depending on where the call is within one of the vectors, so for instance mainvec(12) = vec1(1), which I could do by:

        mainvec(index) if (index >= 13) vec1(index - 11);

    I wonder if there's a concise way of doing this without if statements. Any ideas?

    Read the article

  • Is it possible to definitively identify whether a DML command was issued from a stored procedure?

    - by Ed Harper
    I have inherited a SQL Server 2008 database to which calling applications have access through stored procedures. Each table in the database has a shadow audit table into which Insert/Update/Delete operations are logged. Performance testing on populating the audit tables showed that inserting the audit records using OUTPUT clauses was 20% or so faster than using triggers, so this has been implemented in the stored procedures.

    However, because this design cannot track changes made directly to the tables through DML statements issued directly against the tables, triggers have also been implemented which use the value of @@NESTLEVEL to determine whether or not to run the trigger (the assumption being that all DML run through stored procedures will have @@NESTLEVEL > 1). I.e. the body of the trigger code looks something like:

        IF @@NESTLEVEL = 1 -- implies call is direct SQL, so generate history from here
        BEGIN
            ... insert into audit table

    This design is flawed because it won't track updates where DML statements are executed in dynamic SQL, or any other context where @@NESTLEVEL is raised above 1. Can anyone suggest a completely reliable method we can use in the triggers to execute them only if not triggered by a stored procedure? Or is this (as I suspect) not possible?
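    One technique that is sometimes used for this, offered here only as a sketch and not as part of the original design, is to have every stored procedure stamp the session with CONTEXT_INFO and have the trigger check for that marker instead of @@NESTLEVEL; the marker value 'PROC' is an arbitrary placeholder:

        -- Sketch: mark the session inside each stored procedure...
        DECLARE @marker varbinary(128) = CAST('PROC' AS varbinary(128));
        SET CONTEXT_INFO @marker;
        -- ... perform the procedure's DML and OUTPUT-based auditing here ...
        SET CONTEXT_INFO 0x0;  -- clear the marker before returning

        -- ...and in the trigger, only audit when the marker is absent:
        IF ISNULL(SUBSTRING(CONTEXT_INFO(), 1, 4), 0x0) <> CAST('PROC' AS varbinary(4))
        BEGIN
            -- direct DML (ad hoc, or dynamic SQL without the marker): write audit rows here
            PRINT 'auditing direct DML';
        END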

    Read the article

  • Stored Procedure Does Not Fire Last Command

    - by jp2code
    On our SQL Server (version 10.0.1600), I have a stored procedure that I wrote. It is not throwing any errors, and it is returning the correct values after making the insert in the database. However, the last command, spSendEventNotificationEmail (which sends out email notifications), is not being run. I can run the spSendEventNotificationEmail script manually using the same data, and the notifications show up, so I know it works. Is there something wrong with how I call it in my stored procedure?

        [dbo].[spUpdateRequest](@packetID int, @statusID int output, @empID int, @mtf nVarChar(50))
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            DECLARE @id int
            SET @id = -1

            -- Insert statements for procedure here
            SELECT A.ID, PacketID, StatusID
            INTO #act
            FROM Action A
            JOIN Request R ON (R.ID = A.RequestID)
            WHERE (PacketID = @packetID) AND (StatusID = @statusID)

            IF ((SELECT COUNT(ID) FROM #act) = 0)
            BEGIN
                -- this statusID has not been entered. Continue
                SELECT ID, MTF INTO #req FROM Request WHERE PacketID = @packetID

                WHILE (0 < (SELECT COUNT(ID) FROM #req))
                BEGIN
                    SELECT TOP 1 @id = ID FROM #req

                    INSERT INTO Action (RequestID, StatusID, EmpID, DateStamp)
                    VALUES (@id, @statusID, @empID, GETDATE())

                    IF ((@mtf IS NOT NULL) AND (0 < LEN(RTRIM(@mtf))))
                    BEGIN
                        UPDATE Request SET MTF = @mtf WHERE ID = @id
                    END

                    DELETE #req WHERE ID = @id
                END
                DROP TABLE #req

                SELECT @id = @@IDENTITY, @statusID = StatusID FROM Action
                SELECT TOP 1 @statusID = ID FROM Status WHERE (@statusID < ID) AND (-1 < Sequence)

                EXEC spSendEventNotificationEmail @packetID, @statusID, 'http:\\cpweb:8100\NextStep.aspx'
            END
            ELSE
            BEGIN
                SET @statusID = -1
            END

            DROP TABLE #act
        END

    Idea of how the data tables are connected:

    Read the article

  • Unique prime factors using HashSet

    - by theGreenCabbage
    I wrote a method that recursively finds prime factors. Originally, the method simply printed values. I am currently trying to add them to a HashSet to find the unique prime factors. In each of my original print statements, I added a primes.add() in order to add that particular integer into my set. My printed output remains the same; for example, if I put in the integer 24, I get 2*2*2*3. However, as soon as I print the HashSet, the output is simply [2].

        public static Set<Integer> primeFactors(int n) {
            Set<Integer> primes = new HashSet<Integer>();
            if (n <= 1) {
                System.out.print(n);
                primes.add(n);
            } else {
                for (int factor = 2; factor <= n; factor++) {
                    if (n % factor == 0) {
                        System.out.print(factor);
                        primes.add(factor);
                        if (factor < n) {
                            System.out.print('*');
                            primeFactors(n / factor);
                        }
                        return primes;
                    }
                }
            }
            return primes;
        }

    I have tried debugging via putting print statements around every line, but was unable to figure out why my .add() was not adding some values into my HashSet.
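    For comparison (not part of the original post): the recursive call above discards the set built by the inner invocation. A sketch of one way to carry those results back is to merge them into the caller's set:

        // Sketch: same structure as above, but the results of the recursive
        // call are added into the caller's set instead of being thrown away.
        public static Set<Integer> primeFactors(int n) {
            Set<Integer> primes = new HashSet<Integer>();
            for (int factor = 2; factor <= n; factor++) {
                if (n % factor == 0) {
                    primes.add(factor);
                    if (factor < n) {
                        primes.addAll(primeFactors(n / factor));
                    }
                    return primes;
                }
            }
            return primes;
        }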

    Read the article

  • INSERT stored procedure does not work?

    - by vikitor
    Hello, I'm trying to make an insertion from one database called suspension into the table called Notification in the Animals database. My stored procedure is this:

        ALTER PROCEDURE [dbo].[spCreateNotification]
            -- Add the parameters for the stored procedure here
            @notRecID int,
            @notName nvarchar(50),
            @notRecStatus nvarchar(1),
            @notAdded smalldatetime,
            @notByWho int
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            -- Insert statements for procedure here
            INSERT INTO Animals.dbo.Notification
            VALUES (@notRecID, @notName, @notRecStatus, null, @notAdded, @notByWho);
        END

    The null being inserted is to fill one column that otherwise would not be filled. I've tried different ways, like also listing the column names after the table name and then only including the fields I've got in VALUES. I know it is not a problem with the stored procedure, because I executed it from SQL Server Management Studio and it works when I enter the parameters. So I guess the problem must be in the repository where I call the stored procedure:

        public void createNotification(Notification not)
        {
            try
            {
                DB.spCreateNotification(not.NotRecID, not.NotName, not.NotRecStatus,
                    (DateTime)not.NotAdded, (int)not.NotByWho);
            }
            catch
            {
                return;
            }
        }

    It does not record the value in the database. I've been debugging and getting mad about this, because it works when I execute it manually but not when I automate the process in my application. Does anyone see anything wrong with my code? Thank you

    Read the article

  • NHibernate unintentional lazy property loading

    - by chiccodoro
    I introduced a mapping for a business object which has (among others) a property called "Name":

        public class Foo : BusinessObjectBase
        {
            ...
            public virtual string Name { get; set; }
        }

    For some reason, when I fetch "Foo" objects, NHibernate seems to apply lazy property loading (for simple properties, not associations): the following code generates n+1 SQL statements, whereof the first only fetches the ids, and the remaining n fetch the Name for each record:

        ISession session = ...
        IQuery query = session.CreateQuery(queryString);
        ITransaction tx = session.BeginTransaction();
        List<Foo> result = new List<Foo>();
        foreach (Foo foo in query.Enumerable())
        {
            result.Add(foo);
        }
        tx.Commit();
        session.Close();

    produces:

        NHibernate: select foo0_.FOO_ID as col_0_0_ from V1_FOO foo0_
        NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 81
        NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 36470
        NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 36473

    Similarly, the following code leads to a LazyLoadingException after the session is closed:

        ISession session = ...
        ITransaction tx = session.BeginTransaction();
        Foo result = session.Load<Foo>(id);
        tx.Commit();
        session.Close();
        Console.WriteLine(result.Name);

    Following this post, "lazy properties ... is rarely an important feature to enable ... (and) in Hibernate 3, is disabled by default." So what am I doing wrong? I managed to work around the LazyLoadingException by doing a NHibernateUtil.Initialize(foo), but the even worse part is the n+1 SQL statements, which bring my application to its knees. This is how the mapping looks:

        <class name="Foo" table="V1_FOO">
            ...
            <property name="Name" column="NAME"/>
        </class>

    BTW: The abstract "BusinessObjectBase" base class encapsulates the ID property, which serves as the internal identifier.

    Read the article

  • C/C++ Control Structure Limitations?

    - by STingRaySC
    I have heard of a limitation in VC++ (not sure which version) on the number of nested if statements (somewhere in the ballpark of 300). The code was of the form:

        if (a) ...
        else if (b) ...
        else if (c) ...
        ...

    I was surprised to find out there is a limit to this sort of thing, and that the limit is so small. I'm not looking for comments about coding practice and why to avoid this sort of thing altogether. Here's a list of things that I'd imagine could have some limitation:

        - Number of functions in a scope (global, class, or namespace).
        - Number of expressions in a single statement (e.g., compound conditionals).
        - Number of cases in a switch.
        - Number of parameters to a function.
        - Number of classes in a single hierarchy (either inheritance or containment).

    What other control structures/language features have limits such as this? Do the language standards say anything about these limits (perhaps minimum requirements for an implementation)? Has anyone run into a particular language limitation like this with a particular compiler/implementation?

    EDIT: Please note that the above form of if statements is indeed "nested." It is equivalent to:

        if (a) {
            //...
        } else {
            if (b) {
                //...
            } else {
                if (c) {
                    //...
                } else {
                    //...
                }
            }
        }

    Read the article

  • Passing in a lambda to a Where statement

    - by sonicblis
    I noticed today that if I do this:

        var items = context.items.Where(i => i.Property < 2);
        items = items.Where(i => i.Property > 4);

    once I access the items var, it executes only the first line as the data call and then does the second call in memory. However, if I do this:

        var items = context.items.Where(i => i.Property < 2).Where(i => i.Property > 4);

    I get only one expression executed against the context that includes both where statements. I have a host of variables that I want to use to build the expression for the LINQ lambda, but their presence or absence changes the expression such that I'd have to have a ridiculous number of conditionals to satisfy all cases. I thought I could just add the Where() statements as in my first example above, but that doesn't end up in a single expression that contains all of the criteria. Therefore, I'm trying to create just the lambda itself, like this:

        //bogus syntax
        if (var1 == "something")
            var expression = Expression<Func<item, bool>>(i => i.Property == "Something");
        if (var2 == "somethingElse")
            expression = expression.Where(i => i.Property2 == "SomethingElse");

    and then pass that in to the Where of my context.Items to evaluate. A) is this right, and B) if so, how do you do it?
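    As a sketch of one common approach (not taken from the original post): as long as the variable stays typed as IQueryable<T> rather than falling back to the IEnumerable<T> extension methods, Where calls can be applied conditionally and the provider still sees a single combined expression when the query is finally enumerated. The Item type and variable names below are assumptions reusing the names above:

        // Sketch: build the query up conditionally; nothing executes until the
        // query is enumerated, so every applied Where ends up in one expression.
        IQueryable<Item> query = context.Items;

        if (var1 == "something")
            query = query.Where(i => i.Property == "Something");

        if (var2 == "somethingElse")
            query = query.Where(i => i.Property2 == "SomethingElse");

        var results = query.ToList();  // single translated query with all criteria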

    Read the article

  • Why is debugging better in an IDE?

    - by Bill Karwin
    I've been a software developer for over twenty years, programming in C, Perl, SQL, Java, PHP, JavaScript, and recently Python. I've never had a problem I could not debug using some careful thought, and well-placed debugging print statements. I respect that many people say that my techniques are primitive, and using a real debugger in an IDE is much better. Yet from my observation, IDE users don't appear to debug faster or more successfully than I can, using my stone knives and bear skins. I'm sincerely open to learning the right tools; I've just never been shown a compelling advantage to using visual debuggers. Moreover, I have never read a tutorial or book that showed how to debug effectively using an IDE, beyond the basics of how to set breakpoints and display the contents of variables.

    What am I missing? What makes IDE debugging tools so much more effective than thoughtful use of diagnostic print statements? Can you suggest resources (tutorials, books, screencasts) that show the finer techniques of IDE debugging?

    Sweet answers! Thanks much to everyone for taking the time. Very illuminating. I voted up many, and voted none down. Some notable points:

        - Debuggers can help me do ad hoc inspection or alteration of variables, code, or any other aspect of the runtime environment, whereas manual debugging requires me to stop, edit, and re-execute the application (possibly requiring recompilation).
        - Debuggers can attach to a running process or use a crash dump, whereas with manual debugging, "steps to reproduce" a defect are necessary.
        - Debuggers can display complex data structures, multi-threaded environments, or full runtime stacks easily and in a more readable manner.
        - Debuggers offer many ways to reduce the time and repetitive work to do almost any debugging task.
        - Visual debuggers and console debuggers are both useful, and have many features in common.
        - A visual debugger integrated into an IDE also gives you convenient access to smart editing and all the other features of the IDE, in a single integrated development environment (hence the name).

    Read the article

  • PHP5 getrusage() returning incorrect information?

    - by Andrew
    I'm trying to determine CPU usage of my PHP scripts. I just found this article which details how to find system and user CPU usage time (Section 4). However, when I tried out the examples, I received completely different results.

    The first example:

        sleep(3);
        $data = getrusage();
        echo "User time: ". ($data['ru_utime.tv_sec'] + $data['ru_utime.tv_usec'] / 1000000);
        echo "System time: ". ($data['ru_stime.tv_sec'] + $data['ru_stime.tv_usec'] / 1000000);

    Results in:

        User time: 29.53 System time: 2.71

    Example 2:

        for($i=0;$i<10000000;$i++) { }
        // Same echo statements

    Results:

        User time: 16.69 System time: 2.1

    Example 3:

        $start = microtime(true);
        while(microtime(true) - $start < 3) { }
        // Same echo statements

    Results:

        User time: 34.94 System time: 3.14

    Obviously, none of the information is correct except maybe the system time in the third example. So what am I doing wrong? I'd really like to be able to use this information, but it needs to be reliable. I'm using Ubuntu Server 8.04 LTS (32-bit) and this is the output of php -v:

        PHP 5.2.4-2ubuntu5.10 with Suhosin-Patch 0.9.6.2 (cli) (built: Jan 6 2010 22:01:14)
        Copyright (c) 1997-2007 The PHP Group
        Zend Engine v2.2.0, Copyright (c) 1998-2007 Zend Technologies

    Read the article

  • Using a MockContext inside a Java package not an Android Package.

    - by jax
    I have moved most of my Android code into a separate Java package. I want to run some JUnit 4 tests; however, I can't seem to get a MockContext working. I have extended MockContext() but have not done anything with it yet, as I don't know what needs to be done.

    At:

        private static MyMockContext context = new MyMockContext();

    I get:

        java.lang.ExceptionInInitializerError
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
            at java.lang.reflect.Method.invoke(Unknown Source)
            at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
            at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
            at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
            at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27)
            at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
            at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
            at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:46)
            at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
        Caused by: java.lang.RuntimeException: Stub!
            at android.content.Context.<init>(Context.java:4)
            at android.test.mock.MockContext.<init>(MockContext.java:5)
            at com.example.zulu.MyMockContext.<init>(MyMockContext.java:34)
            at com.example.zulu.RoomCoreImplTest.<clinit>(RoomCoreImplTest.java:15)
            ... 16 more

    Read the article

  • PHP curl timing mismatch

    - by JonoB
    I am running a PHP script that:

        - queries a local database to retrieve an amount
        - executes a curl statement to update an external database with the above amount + x
        - queries the local database again to insert a new row reflecting that the curl statement has been executed.

    One of the problems that I am having is that the curl statement takes 2-4 seconds to execute, so if I have two different users from the same company running the same script at the same time, the execution time of the curl command can cause a mismatch in what should be updated in the external database. This is because the curl statement has not yet returned for the first user, so the second user is working off incorrect figures.

    I am not sure of the best options here, but basically I need to prevent two or more curl statements being run at the same time. I thought of storing a value in the database that indicates that the curl statement is being executed at that time, and preventing any other curl statements being run until it is completed. Once the first curl statement has been executed, the database flag is updated and the next one can run. If this field is 'locked', I could loop through the code and sleep for (5) seconds, and then check again whether the flag has been reset. If after (3) loops the flag is still set, then reset the flag automatically (I've never seen the curl take longer than 5 seconds) and continue processing.

    Are there any other (more elegant) ways of approaching this?

    Read the article

  • Being pressured to GOTO the dark-side

    - by Dan McG
    We have a situation at work where developers working on a legacy (core) system are being pressured into using GOTO statements when adding new features into existing code that is already infected with spaghetti code. Now, I understand there may be arguments for using 'just one little GOTO' instead of spending the time on refactoring to a more maintainable solution. The issue is, this isolated 'just one little GOTO' isn't so isolated. At least once every week or so there is a new 'one little GOTO' to add. This codebase is already a horror to work with due to code dating back to or before 1984 being riddled with GOTOs that would make many Pastafarians believe it was inspired by the Flying Spaghetti Monster itself.

    Unfortunately the language this is written in doesn't have any ready-made refactoring tools, so it makes it harder to push 'refactor to increase productivity later', because short-term wins are the only wins paid attention to here...

    Has anyone else experienced this issue whereby everybody agrees that we cannot keep adding new GOTOs to jump 2000 lines to a random section, but analysts continually insist on doing it just this one time and have management approve it?

    tldr; How can one go about addressing the issue of developers being pressured (forced) to continually add GOTO statements (by add, I mean add to jump to random sections many lines away) because it 'gets that feature in quicker'? I'm beginning to fear we may lose valuable developers to the raptors over this...

    Read the article
