Search Results

Search found 6988 results on 280 pages for 'if else statement'.

Page 244/280

  • JDBC program running a long time: performance issue

    - by phyerbarte
    My program has an issue with Oracle query performance. I believe the SQL itself performs well, because it returns quickly in SQL*Plus. But when my program has been running for a long time, like 1 week, the SQL query (using JDBC) becomes slower (in my logs, the query time is much longer than when I originally started the program). When I restart my program, the query performance comes back to normal. I think something could be wrong with the way I use the PreparedStatement, because the SQL I'm using does not use "?" placeholders at all, just a complex select query. The query process is done by a util class. Here is the pertinent code building the query: public List<String[]> query(String sql, String[] args) { Connection conn = null; conn = openConnection(); conn.setAutoCommit(true); .... PreparedStatement preStatm = null; ResultSet rs = null; ....//set preparedstatement arg code rs = preStatm.executeQuery(); .... finally{ //close rs //close prestatm //close connection } } In my case, args is always null, so only a query SQL string is passed to this query method. Is it possible that this approach slows down the DB query after the program has been running for a long time? Or should I use a Statement instead, or pass args with "?" placeholders in the SQL? How can I find the root cause of my issue? Thanks.
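    A query that is fast in SQL*Plus but gets slower the longer the JVM runs is often a symptom of statements, result sets or connections that are never closed, so cursors pile up on the Oracle side. Below is a minimal sketch of such a util method with every handle closed via try-with-resources; it assumes a pooled javax.sql.DataSource and uses illustrative names, not the asker's actual class.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.util.ArrayList;
        import java.util.List;
        import javax.sql.DataSource;

        // Sketch of a leak-free query helper: every JDBC handle is closed even on exceptions.
        public class QueryUtil {
            private final DataSource dataSource;   // assumed to be a pooled DataSource

            public QueryUtil(DataSource dataSource) {
                this.dataSource = dataSource;
            }

            public List<String[]> query(String sql, Object[] args) throws SQLException {
                List<String[]> rows = new ArrayList<>();
                try (Connection conn = dataSource.getConnection();
                     PreparedStatement ps = conn.prepareStatement(sql)) {
                    if (args != null) {
                        for (int i = 0; i < args.length; i++) {
                            ps.setObject(i + 1, args[i]);   // bind "?" placeholders, if any
                        }
                    }
                    try (ResultSet rs = ps.executeQuery()) {
                        int cols = rs.getMetaData().getColumnCount();
                        while (rs.next()) {
                            String[] row = new String[cols];
                            for (int c = 0; c < cols; c++) {
                                row[c] = rs.getString(c + 1);
                            }
                            rows.add(row);
                        }
                    }
                }   // conn, ps and rs are all closed here, so nothing accumulates over a week
                return rows;
            }
        }

    Logging the elapsed time around executeQuery() on every call would also show whether the slowdown is in the database or elsewhere in the application.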

    Read the article

  • Best way to implement plugin framework - are DLLs the only way (C/C++ project)?

    - by Microkernel
    Introduction: I am currently developing document classifier software in C/C++ and I will be using a Naive-Bayes model for classification. But I wanted users to be able to use any algorithm they want (or that I want in the future), hence I decided to separate the algorithm part of the architecture into a plugin that is attached to the main app at start-up. Hence any user can write his own algorithm as a plugin and use it with my app. Problem Statement: The way I intend to develop this is to have each of the algorithms that a user wants to use built as a DLL file and put into a specific directory. At start-up, my app will search for all the DLLs in that directory and load them. My Questions: (1) What if malicious code is packaged as a DLL (one that exposes the same functions mandated by the plugin framework) and put into my plugins directory? In that case, my app will think it's a plugin, pick it up and call its functions, so the malicious code can easily bring my entire app down (in the worst case it could turn my app into a malicious code launcher!). (2) Are DLLs the only way available to implement the plugin design pattern? (Not only out of fear of malicious plugins; it's also a generic question out of curiosity :) ) (3) I think a lot of software is written with a plugin model for extensibility; if so, how does it defend against such attacks? (4) In general, what do you think about my decision to use a plugin model for extensibility (do you think I should look at any other alternatives?) Thank you -MicroKernel :)

    Read the article

  • What is the best way to do scoped finds based on access control rules in Rails?

    - by Rafael Szuminski
    Hi, I need to find an elegant solution for scoped finds based on access control rules. Essentially I have the following setup: Users, Customers, AccessControl - the last defines which users have access to another user's data. Users need to be able to access not just their own customers but also shared customers of other users. Obviously something like a simple association will not work: has_many :customers and neither will this: has_many :customers, :conditions => 'user_id in (1,2,3,4,5)' because the association uses with_scope and the added condition is an AND condition, not an OR condition. I also tried overriding the find and method_missing methods with an association extension like this: has_many :customers do def find(*args) #get the user_id and retrieve access conditions based on the id #do a find based on the access conditions and passed args end def method_missing(*args) #get the user_id and retrieve access conditions based on the id #do a find based on the access conditions and passed args end end but the issue is that I don't have access to the user object / parent object inside the extension methods, and it just does not work as planned. I also tried default_scope, but as posted here before, you can't pass a block to a default scope. Anyhow, I know that data segmentation and data access controls have been done before using Rails and am wondering if somebody has found an elegant way to do it. UPDATE: The AccessControl table has the following layout: user_id, shared_user_id. The customer table has this structure: id, account_id, user_id, first_name, last_name. Assuming the following (user_id, shared_user_id) pairs are in the AccessControl table: 1 1, 1 3, 1 4, 2 2, 2 13 and so on, and the account_id for user 1 is 13, I need to be able to retrieve customers that are best described by the following SQL statement: select * from customers where (account_id = 13 and user_id = null) or (user_id in (1,3,4))

    Read the article

  • LINQ to SQL Converter

    - by user609511
    How can I convert my LINQ to SQL? I have this LINQ statement: int LimCol = Convert.ToInt32(LimitColis); result = oListTUP .GroupBy(x => new { x.Item1, x.Item2, x.Item3, x.Item4, x.Item5 }) .Select(g => new { Key = g.Key, Sum = g.Sum(x => x.Item6), Poids = g.Sum(x => x.Item7), }) .Select(p => new { Key = p.Key, Items = Enumerable.Repeat(LimCol, p.Sum / LimCol).Concat(Enumerable.Repeat(p.Sum % LimCol, p.Sum % LimCol > 0 ? 1 : 0)), CalculPoids = p.Poids / Enumerable.Repeat(LimCol, p.Sum / LimCol).Concat(Enumerable.Repeat(p.Sum % LimCol, p.Sum % LimCol > 0 ? 1 : 0)).Count() }) .SelectMany(p => p.Items.Select(i => Tuple.Create(p.Key.Item1, p.Key.Item2, p.Key.Item3, p.Key.Item4, p.Key.Item5, i, p.CalculPoids))) .ToList(); It works well, but I now want to push it further and it has become too complicated, so I want to convert it into pure SQL. I have tried SQL Profiler and LINQPad, but neither shows me the SQL. How can I see the SQL generated from my LINQ? Thank you in advance.

    Read the article

  • Import CSV to class structure as the user defines

    - by Assimilater
    I have a contact manager program and I would like to offer a feature to import CSV files. The problem is that different data sources order the fields in different ways. I thought of programming an interface for the user to tell it the field order and how to handle exceptions. Here is an example line in one of many possible field orders: "ID#","Name","Rank","Address1","Address2","City","State","Country","Zip","Phone#","Email","Join Date","Sponsor ID","Sponsor Name" "Z1234","Call, Anson","STU","1234 E. 6578 S.","","Somecity","TX","United States","012345","000-000-0000","[email protected]","5/24/2010","z12343","Quantum Independence" Notice that in one data field, "Name", there is a comma separating last name and first name, and in another there is not. My plan is to have a line for each field (i.e. ID, Name, City etc.), a statement "import to", and a list box with options like: Don't Import, Business, Join Date, First Name, Zip, and have the program recognize those as properties of an object. I'd also like the user to be able to record preset field orders so they can re-use them for CSV files from the same download source. Then I also need it to check whether a record already exists (is there a record for Anson Call already?) and allow the user to tell it what to do if there is one (e.g. the mailing address may have changed, so if that field is filled, overwrite it; or this mailing address is invalid, so leave the current data untouched for this person and overwrite the rest). While I'm capable of coding this, I'm not very excited about it, and I'm wondering if there's a tool or set of tools out there that already performs most of this functionality. I hope this makes sense...
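    A rough sketch of the "user-defined field order" idea: the user picks, per download source, which CSV column feeds which contact field, and that preset is just a map from field name to column index. Field names and the sample record are made up, and a real importer would need a CSV parser that respects quoted commas such as "Call, Anson".

        import java.util.List;
        import java.util.Map;
        import java.util.TreeMap;

        // Illustrative only: apply a user-defined column mapping to one already-split CSV record.
        public class CsvFieldMapping {
            public static void main(String[] args) {
                // Preset for one source: contact field -> column index ("Don't Import" fields are simply absent).
                Map<String, Integer> preset = new TreeMap<>();
                preset.put("id", 0);
                preset.put("name", 1);
                preset.put("zip", 8);
                preset.put("joinDate", 11);

                List<String> row = List.of("Z1234", "Call, Anson", "STU", "1234 E. 6578 S.", "",
                        "Somecity", "TX", "United States", "012345", "000-000-0000",
                        "name@example.com", "5/24/2010", "z12343", "Quantum Independence");

                Map<String, String> contact = new TreeMap<>();
                for (Map.Entry<String, Integer> e : preset.entrySet()) {
                    contact.put(e.getKey(), row.get(e.getValue()));
                }
                System.out.println(contact);   // the mapped subset: id, joinDate, name, zip
            }
        }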

    Read the article

  • How to specify multiple values in where with AR query interface in rails3

    - by wkhatch
    Per section 2.2 of the Rails guide on the Active Record query interface, it seems that I can pass a string specifying the condition(s) and then an array of values that should be substituted at some point while the arel is being built. So I've got a statement that generates my conditions string, which can be a varying number of attributes chained together with either AND or OR between them, and I pass in an array as the second argument to the where method, and I get: ActiveRecord::PreparedStatementInvalid: wrong number of bind variables (1 for 5) which leads me to believe I'm doing this incorrectly. However, I'm not finding anything on how to do it correctly. To restate the problem another way, I need to pass the where method a string such as "table.attribute = ? AND table.attribute1 = ? OR table.attribute1 = ?" with an unknown number of these conditions ANDed or ORed together, and then pass something, what I thought would be an array, as the second argument to be used to substitute the values into the conditions string. Is this the correct approach, or am I just missing some other huge concept somewhere and coming at this all wrong? I'd think that somehow this has to be possible, short of just generating a raw SQL string.

    Read the article

  • Esper generating episodes

    - by Jasonojh
    I would like to use Esper to generate episodes of events. I am trying to detect changes in robot movement during each time period and was wondering what would be the best way to implement this. The rules for generating episodes from events are: if the new event time (e.g. 7 sec, robot A) of a robot is more than 3 sec after the latest event (e.g. 3 sec, robot A) of the same robot, the new event belongs to a new episode; and each episode should represent only one robot (e.g. 2 sec, robot A and 3 sec, robot B should output 2 episodes). Input data:
        Event  Time  Robot  Position
        1      1     A      0
        2      2     A      1
        3      6     A      2
    Output data should be: Array[0]={Event 1, Event 2} Array[1]={Event 3} //more than 3 sec. Input data:
        Event  Time  Robot  Position
        1      1     A      0
        2      2     A      1
        3      4     B      0
        4      6     A      2
    Output data should be: Array[0]={Event 1, Event 2} Array[1]={Event 3} //different robot Array[2]={Event 4} Please help provide suggestions. I have tried using multiple listeners, one for each robot, to create episodes and it works, but I am trying to use a single EPL statement to do it. I have tried win:time_accum(3 sec) group by robot, but for the second example it outputs: Array[0]={Event 1, Event 2, Event 4} Array[1]={Event 3} because the time window is shifted every time an event comes in, so the system still thinks that event 4 is less than 3 sec away due to event 3. How do I create a unique time window for each robot? Thank you for your suggestions; any help is greatly appreciated.

    Read the article

  • Most efficient way to LIMIT results in a JOIN?

    - by johnnietheblack
    I have a fairly simple one-to-many type join in a MySQL query. In this case, I'd like to LIMIT my results by the left table. For example, let's say I have an accounts table and a comments table, and I'd like to pull 100 rows from accounts and all the associated comments rows for each. The only way I can think to do this is with a sub-select in the FROM clause instead of simply selecting FROM accounts. Here is my current idea: SELECT a.*, c.* FROM (SELECT * FROM accounts LIMIT 100) a LEFT JOIN `comments` c on c.account_id = a.id ORDER BY a.id However, whenever I need to do a sub-select of some sort, my intermediate-level SQL knowledge feels like it's doing something wrong. Is there a more efficient, or faster, way to do this, or is this pretty good? By the way... this might be the absolute simplest way to do it, which I'm okay with as an answer. I'm simply trying to figure out if there IS another way that could potentially compete with the above statement in terms of speed.

    Read the article

  • CREATE VIEW called multiple times not creating all views

    - by theninepoundhammer
    Noticing strange behavior in SQL 2005, both Express and Enterprise Edition: In my code I need to loop through a series of values (about five in a row), and for each value, I need to insert the value into a table and dynamically create a new view using that value as part of the where clause and the name of the view. The code runs pretty quickly, but what I'm noticing is that all the values are inserted into the table correctly but only the LAST view is being created. Every time. For example, if the values I'm using are X1, X2, X3, X4, and X5, I'll run the process, open up Mgmt Studio, and see five rows in the table with the correct five values, but only one view named MyView_x5 that has the correct WHERE clause. At first, I had this loop in an SSIS package as part of a larger data flow. When I started noticing this behavior, I created a stored proc that would create the CREATE VIEW statement dynamically after the insert and called EXECUTE to create the view. Same result. Finally, I created some C# code using the Enterprise Library DAAB, and did the insert and CREATE VIEW statements from my DLL. Same result every time. Most recently, I turned on Profiler while running against the Enterprise Edition and was able to verify that the Batch Started and Batch Completed events were being fired off for each instance of the view. However, like I said, only the last view is actually being created. Does anyone have any idea why this might be happening? Or any suggestions about what else to check or profile? I've profiled for error messages, exceptions, etc. but don't see any in my trace file. My express edition is 9.00.1399.06. Not sure about the Enterprise edition but think it is SP2.

    Read the article

  • Call Oracle package function using Odbc from C#

    - by Paolo Tedesco
    I have a function defined inside an Oracle package: CREATE OR REPLACE PACKAGE BODY TESTUSER.TESTPKG as FUNCTION testfunc(n IN NUMBER) RETURN NUMBER as begin return n + 1; end testfunc; end testpkg; / How can I call it from C# using Odbc? I tried the following: using System; using System.Data; using System.Data.Odbc; class Program { static void Main(string[] args) { using (OdbcConnection connection = new OdbcConnection("DSN=testdb;UID=testuser;PWD=testpwd")) { connection.Open(); OdbcCommand command = new OdbcCommand("TESTUSER.TESTPKG.testfunc", connection); command.CommandType = System.Data.CommandType.StoredProcedure; command.Parameters.Add("ret", OdbcType.Int).Direction = ParameterDirection.ReturnValue; command.Parameters.Add("n", OdbcType.Int).Direction = ParameterDirection.Input; command.Parameters["n"].Value = 42; command.ExecuteNonQuery(); Console.WriteLine(command.Parameters["ret"].Value); } } } But I get an exception saying "Invalid SQL Statement". What am I doing wrong?

    Read the article

  • How to query MYSQL when clicked?

    - by Sam
    Hello, I have a while statement echoing every row in my database that matches a WHERE parameter. How can I make it so that when I click on something (anything, for the moment), it updates that specific row? Here's my code: while($request = mysql_fetch_array( $request_db )) { echo "<tr><td style=\"width:33%;padding:1px;\">"; echo $request['SongName']; echo "</td><td style=\"width:33%;\">"; echo $request['Artist']; echo "</td><td style=\"width:33%;\">"; echo $request['DedicatedTo']; echo "</td><td style=\"width:33%;\">"; echo "UPDATE A ROW's 'Hasplayed' value to '1'."; echo "</td></tr>"; } echo "</table>"; Thanks!

    Read the article

  • Application Code Redesign to reduce no. of Database Hits from Performance Perspective

    - by Rachel
    Scenario: I want to parse a large CSV file and insert the data into the database; the CSV file has approximately 100K rows of data. Currently I am using fgetcsv to parse through the file row by row and insert the data into the database, so right now I am hitting the database for each line of data present in the CSV file. The database hit count is therefore currently 100K, which is not good from a performance point of view. Current code: public function initiateInserts() { //Open Large CSV File(min 100K rows) for parsing. $this->fin = fopen($file,'r') or die('Cannot open file'); //Parsing Large CSV file to get data and initiate insertion into schema. while (($data=fgetcsv($this->fin,5000,";"))!==FALSE) { $query = "INSERT INTO dt_table (id, code, connectid, connectcode) VALUES (:id, :code, :connectid, :connectcode)"; $stmt = $this->prepare($query); // Then, for each line : bind the parameters $stmt->bindValue(':id', $data[0], PDO::PARAM_INT); $stmt->bindValue(':code', $data[1], PDO::PARAM_INT); $stmt->bindValue(':connectid', $data[2], PDO::PARAM_INT); $stmt->bindValue(':connectcode', $data[3], PDO::PARAM_INT); // Execute the statement $stmt->execute(); $this->checkForErrors($stmt); } } I am looking for a way in which, instead of hitting the database for every row of data, I can prepare the query and then hit the database once to populate it with the inserts. Any suggestions! Note: This is the exact sample code that I am using, but the actual CSV file has more fields, not only id, code, connectid and connectcode; I just wanted to make sure that I could explain the logic, so I have used this sample code here. Thanks!
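    The original code is PHP/PDO; the sketch below only illustrates the general "prepare once, queue rows, flush in batches, commit per batch" idea, here expressed with JDBC batching. The connection URL, credentials and file name are made up, and for MySQL the driver's rewriteBatchedStatements=true setting is what turns a batch into a true multi-row insert.

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        // Sketch: one prepared statement, rows queued with addBatch(), flushed every 1000 rows.
        public class CsvBatchLoader {
            public static void main(String[] args) throws IOException, SQLException {
                String sql = "INSERT INTO dt_table (id, code, connectid, connectcode) VALUES (?, ?, ?, ?)";
                try (Connection conn = DriverManager.getConnection("jdbc:mysql://localhost/db", "user", "pw");
                     PreparedStatement ps = conn.prepareStatement(sql);
                     BufferedReader in = new BufferedReader(new FileReader("large.csv"))) {
                    conn.setAutoCommit(false);          // one commit per batch instead of one per row
                    int pending = 0;
                    String line;
                    while ((line = in.readLine()) != null) {
                        String[] f = line.split(";");
                        for (int i = 0; i < 4; i++) {
                            ps.setInt(i + 1, Integer.parseInt(f[i]));
                        }
                        ps.addBatch();
                        if (++pending % 1000 == 0) {    // flush every 1000 rows
                            ps.executeBatch();
                            conn.commit();
                        }
                    }
                    ps.executeBatch();                  // flush the remainder
                    conn.commit();
                }
            }
        }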

    Read the article

  • Setting WCF service for multiple client calls

    - by user348255
    Hi all, I have made a WCF service which is defined like this: [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)] Binding is done using netTcpBinding. We support 50+ clients that call the server from time to time. Each client opens a channel using a ChannelFactory once it is loaded and uses that channel for all calls (it creates the channel and proxy only once). We have built a small load tester that imitates the clients by calling the server from 50 different threads at once (using 50 different channels). When we run this tester, after the 10th client tries to connect, all the other clients fail to connect. We have set throttling to 100. My questions are: 1. Is it correct for each client to create a channel and use it for the client's lifetime, or do I need to use a using statement for each call to the server (create and destroy a new channel for each call)? 2. Does the service have a limit on channel connections to it, other than throttling? Thanks a lot, Guy.

    Read the article

  • How do I insert null fields with Perl's DBD::Pg?

    - by User1
    I have a Perl script inserting data into Postgres from a pipe-delimited text file. Sometimes a field is null (as expected). However, Perl turns this field into an empty string and the Postgres insert statement fails. Here's a snippet of code: use DBI; #Connect to the database. $dbh=DBI->connect('dbi:Pg:dbname=mydb','mydb','mydb',{AutoCommit=>1,RaiseError=>1,PrintError=>1}); #Prepare an insert. $sth=$dbh->prepare("INSERT INTO mytable (field0,field1) SELECT ?,?"); while (<>){ #Remove the whitespace chomp; #Parse the fields. @field=split(/\|/,$_); print "$_\n"; #Do the insert. $sth->execute($field[0],$field[1]); } And if the input is: a|1 b| c|3 EDIT: Use this input instead. a|1|x b||x c|3|x It will fail at the b row. DBD::Pg::st execute failed: ERROR: invalid input syntax for integer: "" I just want it to insert a null into field1 instead. Any ideas? EDIT: I simplified the input at the last minute. The old input actually made it work for some reason. So now I have changed the input to something that will make the program fail. Also note that field1 is a nullable integer datatype.
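    The question is about Perl/DBD::Pg, but the underlying point is language-independent: an empty string is not NULL, so a blank field has to be bound as NULL explicitly (in Perl that means passing undef rather than ""). The JDBC sketch below, with made-up connection details, shows the same distinction in Java terms.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import java.sql.Types;

        // Sketch: blank integer fields are bound as SQL NULL instead of an empty string.
        public class NullableInsert {
            public static void main(String[] args) throws SQLException {
                try (Connection conn = DriverManager.getConnection("jdbc:postgresql://localhost/mydb", "mydb", "mydb");
                     PreparedStatement ps = conn.prepareStatement(
                             "INSERT INTO mytable (field0, field1) VALUES (?, ?)")) {
                    String[] fields = "b|".split("\\|", -1);   // a row whose second field is blank
                    ps.setString(1, fields[0]);
                    if (fields.length < 2 || fields[1].isEmpty()) {
                        ps.setNull(2, Types.INTEGER);          // NULL, not "", for the nullable integer column
                    } else {
                        ps.setInt(2, Integer.parseInt(fields[1]));
                    }
                    ps.executeUpdate();
                }
            }
        }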

    Read the article

  • Should frontend and backend be handled by different controllers?

    - by DR
    In my previous learning projects I always used a single controller, but now I wonder if that is good practice or even always possible. In all RESTful Rails tutorials the controllers have a show, an edit and an index view. If an authorized user is logged on, the edit view becomes available and the index view shows additional data manipulation controls, like a delete button or a link to the edit view. Now I have a Rails application which falls exactly into this pattern, but the index view is not reusable: the normal user sees a flashy index page with lots of pictures, complex layout, no Javascript requirement, ... while the admin user's index has a completely different minimalistic design, a jQuery table and lots of additional data, ... Now I'm not sure how to handle this case. I can think of the following:
        1. Single controller, single view: the view is split into two large blocks/partials using an if statement.
        2. Single controller, two views: index and index_admin.
        3. Two different controllers: BookController and BookAdminController.
    None of these solutions seems perfect, but for now I'm inclined to use the 3rd option. What's the preferred way to do this?

    Read the article

  • Is using a GUI worse than using bash and other text interface tools?

    - by Glycerine
    As a freelancer and previous Adobe trainer, I get a look-in at many development workflows and alternate styles of programming and design, and I am therefore quite open to different workflows. But on a recent post I needed to use SQL, so I whipped out Navicat and wrote a nice join statement. I got sniggering comments and sideways glances when I told the developers I was working with that I use Navicat, that I prefer a GUI to bash, and that I prefer Aptana to Notepad. I felt a little insulted and under-skilled, as I didn't want to sit in front of reams of spewing green text. I know and use those tools when required, but I prefer more attractive, modern products. Have you guys had this? What do you do to overcome it, apart from using bash and Notepad more? How do you subdue a developer who's being arrogant about his skill? And is it my fault? I hope this question is not subjective; I do feel like I am inferior to my peers because of it, so some advice would really help.

    Read the article

  • Approach for parsing file and creating dynamic data structure for use by another program

    - by user275633
    All, Background: I have a customer who has some build scripts for their datacenter, based on Python, that I've inherited. I did not work on the original design, so I'm limited to some degree in what I can and can't change. That said, my customer has a properties file that they use in their datacenter. Some of the values are used to build their servers, and unfortunately they have other applications that also use these values, so I cannot change them to make it easier for me. What I want to do is make the scripts more dynamic to distribute more hosts, so that I don't have to keep updating the scripts in the future and can just add more hosts to the property file. Unfortunately I can't change the current property file and have to work with it. The property file looks something like this:
        projectName.ClusterNameServer1.sslport=443
        projectName.ClusterNameServer1.port=80
        projectName.ClusterNameServer1.host=myHostA
        projectName.ClusterNameServer2.sslport=443
        projectName.ClusterNameServer2.port=80
        projectName.ClusterNameServer2.host=myHostB
    In their deployment scripts they basically have a lot of "if projectName.ClusterNameServerX" checks, where X is some number of defined entries, and then do something, e.g.:
        if projectName.ClusterNameServer1.host != "" do X
        if projectName.ClusterNameServer2.host != "" do X
        if projectName.ClusterNameServer3.host != "" do X
    Then when they add another host (say Server4) they add another if statement. Question: What I would like to do is make the scripts more dynamic, parse the properties file, put what I need into some data structure to pass to the deployment scripts, and then just iterate over the structure and do my deployment that way, so I don't have to constantly add a bunch of "if some host# do something" blocks. I'm just curious to get some suggestions as to what others would do to parse the file, what sort of data structure they would use, and how they would group things together, by ClusterNameServer# or something else. Thanks
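    The build scripts in question are Python; the Java sketch below is only meant to illustrate the data structure: parse every projectName.ClusterNameServerN.key=value line into one small map per server, so the deployment code can loop over servers instead of hard-coding an if block per host. The file name is made up.

        import java.io.FileReader;
        import java.io.IOException;
        import java.util.Map;
        import java.util.Properties;
        import java.util.TreeMap;

        // Sketch: group flat "project.server.field=value" properties by server name.
        public class ClusterProperties {
            public static void main(String[] args) throws IOException {
                Properties props = new Properties();
                try (FileReader in = new FileReader("cluster.properties")) {
                    props.load(in);
                }
                // e.g. "ClusterNameServer1" -> {host=myHostA, port=80, sslport=443}
                Map<String, Map<String, String>> servers = new TreeMap<>();
                for (String key : props.stringPropertyNames()) {
                    String[] parts = key.split("\\.");           // projectName / ClusterNameServerN / field
                    if (parts.length == 3) {
                        servers.computeIfAbsent(parts[1], k -> new TreeMap<>())
                               .put(parts[2], props.getProperty(key));
                    }
                }
                for (Map.Entry<String, Map<String, String>> e : servers.entrySet()) {
                    System.out.println(e.getKey() + " -> " + e.getValue());   // deploy per server here
                }
            }
        }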

    Read the article

  • SQL Group by Minute- Expanded

    - by Barnie
    I am working on something similar to this post: TS SQL - group by minute. However, mine is pulling from a message queue, and I need to see an accurate count of the amount of traffic the message queue is creating/sending, and at what time: Select * From MessageQueue mq. My expanded version of this is the following: A) The user defines a start time and an end time (easy enough using DECLARE @StartTime and @EndTime). B) Give the user the option of choosing the grouping: will it be broken out by 1-minute, 5-minute, 15-minute, or 30-minute (max) intervals? (I had thought to do this with a CASE statement, but my test problems fall apart on me.) C) Display the data to accurately show a count of what happened during the selected interval (grouping). This is where I am at so far: DECLARE @StartTime datetime DECLARE @EndTime datetime SELECT DATEPART(n, mq.cre_date)/5 as Time --Trying to just sort by 5 minute intervals ,CONVERT(VARCHAR(10),mq.Cre_Date,101) ,COUNT(*) as results FROM dbo.MessageQueue mq WHERE mq.cre_date BETWEEN @StartTime AND @EndTime GROUP BY DATEPART(n, mq.cre_date)/5 --Trying to just sort by 5 minute intervals , mq.Cre_Date This is the output I would like to achieve:
        [Time]  [Date]      [Message Count]
        1300    06/26/2012  5
        1305    06/26/2012  1
        1310    06/26/2012  100
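    The grouping trick in that query is just integer division: dividing the minute by the bucket size and multiplying back gives the start of the interval. The question itself is T-SQL; the small Java example below, with made-up timestamps, only illustrates that bucketing arithmetic.

        import java.time.LocalDateTime;
        import java.util.List;
        import java.util.Map;
        import java.util.TreeMap;

        // Sketch: count events per fixed-size minute bucket (1, 5, 15 or 30 minutes).
        public class MinuteBuckets {
            public static void main(String[] args) {
                int bucketMinutes = 5;   // the user-chosen grouping
                List<LocalDateTime> creDates = List.of(
                        LocalDateTime.of(2012, 6, 26, 13, 1),
                        LocalDateTime.of(2012, 6, 26, 13, 4),
                        LocalDateTime.of(2012, 6, 26, 13, 7));

                Map<LocalDateTime, Integer> counts = new TreeMap<>();
                for (LocalDateTime t : creDates) {
                    int minuteOfDay = t.getHour() * 60 + t.getMinute();
                    int bucketStart = (minuteOfDay / bucketMinutes) * bucketMinutes;   // e.g. 13:07 -> 13:05
                    LocalDateTime bucket = t.toLocalDate().atStartOfDay().plusMinutes(bucketStart);
                    counts.merge(bucket, 1, Integer::sum);
                }
                counts.forEach((bucket, n) -> System.out.println(bucket + "  " + n));
                // 2012-06-26T13:00  2
                // 2012-06-26T13:05  1
            }
        }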

    Read the article

  • Cache problem running two consecutive HTTP GET requests from APP1 to APP2

    - by user502052
    I use Ruby on Rails 3 and I have 2 applications (APP1 and APP2) working on two subdomains, app1.domain.local and app2.domain.local, and I am trying to run two consecutive HTTP GET requests from APP1 to APP2 like this: Code in APP1 (request): response1 = Net::HTTP.get( URI.parse("http://app2.domain.local?test=first&id=1") ) response2 = Net::HTTP.get( URI.parse("http://app2.domain.local?test=second&id=1") ) Code in APP2 (response): respond_to do |format| if <model_name>.find(params[:id]).<field_name> == "first" <model_name>.find(params[:id]).update_attribute ( <field_name>, <field_value> ) format.xml { render :xml => <model_name>.find(params[:id]).<field_name> } elsif <model_name>.find(params[:id]).<field_name> == "second" format.xml { render :xml => <model_name>.find(params[:id]).<field_name> } end end After the first request I get the correct XML (response1 is what I expect), but on the second I don't (response2 isn't what I expect). Doing some tests I found that the second time <model_name>.find(params[:id]).<field_name> is run (for the elsif statement) it always returns a blank value, so the code in the elsif branch is never run. Is it possible that the problem is related to caching <model_name>.find(params[:id]).<field_name>? P.S.: I read about ETag and Conditional GET, but I am not sure that I need that approach. I would like to keep everything simple.

    Read the article

  • In jQuery datepicker beforeShowDay if fails

    - by michalzuber
    Hi. I'm having a problem highlighting multiple days. Somehow the if statement in beforeShowDay isn't true when it should be :( The typeof dateString is string, and an example is available in the screenshot at http://mikaelz.host.sk/datepicker.png Pasted code: var dates = new Array(); function addDate(date) {if (jQuery.inArray(date, dates) < 0) dates.push(date);} function removeDate(index) {dates.splice(index, 1);} function addOrRemoveDate(date) { var index = jQuery.inArray(date, dates); if (index >= 0) removeDate(index); else addDate(date); } function padNumber(number) { var ret = new String(number); if (ret.length == 1) ret = "0" + ret; return ret; } jQuery(document).ready(function(){ jQuery("#pick-date").datepicker({ onSelect: function(dateText, inst) { addOrRemoveDate(dateText); }, beforeShowDay: function (date){ var year = date.getFullYear(); var month = padNumber(date.getMonth()); var day = padNumber(date.getDate()); var dateString = day + "." + month + "." + year; if (dates.indexOf(dateString) >= 0) { return [true,"ui-state-highlight"]; } return [true, ""]; } }); }); Big thanks for any ideas about what could be wrong.

    Read the article

  • PHP Multiple navigation highlights

    - by Blackbird
    I'm using the piece of code below to highlight the "active" menu item in my global navigation. <?php $path = $_SERVER['PHP_SELF']; $page = basename($path); $page = basename($path, '.php'); ?> <ul id="nav"> <li class="home"><a href="index.php">Home</a></li> <li <?php if ($page == 'search') { ?>class="active"<?php } ?>><a href="#">Search</a></li> <li <?php if ($page == 'help') { ?>class="active"<?php } ?>><a href="help.php">Help</a></li> </ul> It all works great, but on a couple of pages I have a second sub-menu (a sidebar menu) within a global page. I basically need to add an OR-type condition to the PHP somehow, but I haven't a clue how. Example of what I mean: <li <?php if ($page == 'help') OR ($page == 'help-overview') OR ($page == 'help-cat-1') { ?>class="active"<?php } ?>><a href="#">Search</a></li> Thanks!

    Read the article

  • Should I use early returns in C#?

    - by Bobby
    I've learned Visual Basic and was always taught to keep the flow of the program without interruptions like Goto, Exit and Return. Using nested ifs instead of one return statement seems very natural to me. Now that I'm partly migrating towards C#, I wonder what the best practice is for C-like languages. I've been working on a C# project for some time and, of course, I keep discovering more code like ExampleB, and it's hurting my mind somehow. But what is the best practice here, which style is used more often, and are there any reasons against either style? public void ExampleA() { if (a != b) { if (a != c) { bool foundIt = false; for (int i = 0; i < d.Count && !foundIt; i++) { if (element == f) foundIt = true; } } } } public void ExampleB() { if (a == b) return; if (a == c) return; foreach (object element in d) { if (element == f) break; } }
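    For comparison, here is the same shape written out in Java, whose syntax mirrors the C# in the question; names and data are made up. The guard-clause version rejects the uninteresting cases first, so the remaining code stays flat and the early returns read as preconditions rather than as jumps.

        import java.util.List;

        // Illustrative contrast between nested ifs and guard clauses.
        public class GuardClauseExample {
            // Nested style: the "happy path" ends up indented inside every check.
            static boolean containsNested(String a, String b, String c, List<String> items, String target) {
                boolean found = false;
                if (!a.equals(b)) {
                    if (!a.equals(c)) {
                        for (int i = 0; i < items.size() && !found; i++) {
                            if (items.get(i).equals(target)) {
                                found = true;
                            }
                        }
                    }
                }
                return found;
            }

            // Guard-clause style: bail out early, then do the real work without extra nesting.
            static boolean containsGuarded(String a, String b, String c, List<String> items, String target) {
                if (a.equals(b)) return false;
                if (a.equals(c)) return false;
                for (String item : items) {
                    if (item.equals(target)) return true;
                }
                return false;
            }

            public static void main(String[] args) {
                List<String> items = List.of("x", "y", "z");
                System.out.println(containsNested("a", "b", "c", items, "y"));   // true
                System.out.println(containsGuarded("a", "b", "c", items, "y"));  // true
            }
        }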

    Read the article

  • C# call to a C++ DLL gets EntryPointNotFoundException

    - by 5YrsLaterDBA
    I was given a C++ DLL file, a lib file and a header file. I need to call them from my C# application. The header file looks like this: class Clog ; class EXPORT_MACRO NB_DPSM { private: string sFileNameToAnalyze ; Clog *pLog ; void write2log(string text) ; public: NB_DPSM(void); ~NB_DPSM(void); void setFileNameToAnalyze(string FileNameToAnalyze) ; int WriteGenbenchData(string& message) ; }; In my C# code, I have this: internal ReturnStatus correctDataDLL(string rawDataFileName) { if (rawDataFileName == null || rawDataFileName.Length <= 0) { return ReturnStatus.Return_CannotFindFile; } else { setFileNameToAnalyze(rawDataFileName); } string msg = ""; int returnVal = WriteGenbenchData(ref msg); return ReturnStatus.Return_Success; } [DllImport("..\\..\\thirdParty\\cogs\\NB_DPSM.dll")] public static extern void setFileNameToAnalyze(string fileName); [DllImport("..\\..\\thirdParty\\cogs\\NB_DPSM.dll")] public static extern int WriteGenbenchData(ref string message); I get an EntryPointNotFoundException at the setFileNameToAnalyze(rawDataFileName); statement. A few questions: 1. Do I need to add that lib file somewhere in my C# project? How? 2. Do I need to add the header file to my C# project? How? (No compile error for now.) 3. I would like to remove those "..\\..\\thirdParty\\cogs\\" hardcoded paths. How do I do this? 4. How do I get rid of that EntryPointNotFoundException? Thanks,

    Read the article

  • Shaping EF LINQ Query Results Using Multi-Table Includes

    - by sisdog
    I have a simple LINQ EF query below using the method syntax. I'm using my Include statement to join four tables: Event and Doc are the two main tables, EventDoc is a many-to-many link table, and DocUsage is a lookup table. My challenge is that I'd like to shape my results by selecting only specific columns from each of the four tables. But the compiler is giving me the following error: 'System.Data.Objects.DataClasses.EntityCollection<EventDoc>' does not contain a definition for 'Doc' and no extension method 'Doc' accepting a first argument of type 'System.Data.Objects.DataClasses.EntityCollection<EventDoc>' could be found. I'm sure this is something easy but I'm not figuring it out. I haven't been able to find an example of someone using a multi-table Include while also shaping the projection. Thx, Mark var qry= context.Event .Include("EventDoc.Doc.DocUsage") .Select(n => new { n.EventDate, n.EventDoc.Doc.Filename, //<=COMPILER ERROR HERE n.EventDoc.Doc.DocUsage.Usage }) .ToList(); EventDoc ed; Doc d = ed.Doc; //<=NO COMPILER ERROR SO I KNOW MY MODEL'S CORRECT DocUsage du = d.DocUsage;

    Read the article

  • linked list sort function only loops once

    - by Tristan Pearce
    I have a singly linked list that I am trying to sort from least to greatest by price. Here is what I have so far: struct part { char* name; float price; int quantity; struct part *next; }; typedef struct part partType; partType *sort_price(partType **item) { partType *temp1 = *item; partType *temp2 = (*item)->next; if ( *item == NULL || (*item)->next == NULL ) return *item; else { while ( temp2 != NULL && temp2->next != NULL ){ if (temp2->price > temp2->next->price){ temp1->next = temp2->next; temp2->next = temp2->next->next; temp1->next->next = temp2; } temp1 = temp2; temp2 = temp2->next; } } return *item; } The list is already populated, but when I call the sort function it only swaps the first two nodes that satisfy the condition in the if statement. I don't understand why it doesn't do the check again after the two temp pointers are incremented.
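    The question's code is C; the Java sketch below only illustrates the repeated-pass idea: a single sweep can only bubble each out-of-order pair once, so the sweep has to be repeated until a full pass makes no swap. Swapping the node payloads instead of relinking also sidesteps the head-pointer bookkeeping. Names and sample data are made up.

        // Sketch: repeated-pass (bubble) sort of a singly linked list by price.
        public class PartListSort {
            static class Part {
                String name;
                float price;
                Part next;
                Part(String name, float price) { this.name = name; this.price = price; }
            }

            static void sortByPrice(Part head) {
                boolean swapped;
                do {
                    swapped = false;
                    for (Part p = head; p != null && p.next != null; p = p.next) {
                        if (p.price > p.next.price) {
                            // swap the data, not the links, so the head reference never changes
                            String tmpName = p.name;
                            float tmpPrice = p.price;
                            p.name = p.next.name;
                            p.price = p.next.price;
                            p.next.name = tmpName;
                            p.next.price = tmpPrice;
                            swapped = true;
                        }
                    }
                } while (swapped);   // one pass is not enough; repeat until the list is ordered
            }

            public static void main(String[] args) {
                Part head = new Part("bolt", 3.5f);
                head.next = new Part("nut", 1.2f);
                head.next.next = new Part("washer", 2.0f);
                sortByPrice(head);
                for (Part p = head; p != null; p = p.next) {
                    System.out.println(p.name + " " + p.price);   // nut 1.2, washer 2.0, bolt 3.5
                }
            }
        }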

    Read the article
