Search Results

Search found 18220 results on 729 pages for 'null hypothesis'.


  • When should an EntityManagerFactory instance be created/opened?

    - by masato-san
    OK, I read a bunch of articles/examples on how to write an EntityManagerFactory
    as a singleton. This one was the easiest for me to understand:
    http://javanotepad.blogspot.com/2007/05/jpa-entitymanagerfactory-in-web.html
    I learned that the EntityManagerFactory (EMF) should only be created once,
    preferably in application scope, and also to make sure to close the EMF once
    it's no longer used(?). So I wrote an EMF helper class for business methods
    to use:

        public class EmProvider {
            private static final String DB_PU = "KogaAlphaPU";
            public static final boolean DEBUG = true;
            private static final EmProvider singleton = new EmProvider();
            private EntityManagerFactory emf;

            private EmProvider() {}

            public static EmProvider getInstance() {
                return singleton;
            }

            public EntityManagerFactory getEntityManagerFactory() {
                if (emf == null) {
                    emf = Persistence.createEntityManagerFactory(DB_PU);
                }
                if (DEBUG) {
                    System.out.println("factory created on: " + new Date());
                }
                return emf;
            }

            public void closeEmf() {
                if (emf != null && emf.isOpen()) { // null check must come before isOpen()
                    emf.close();
                }
                emf = null;
                if (DEBUG) {
                    System.out.println("EMF closed at: " + new Date());
                }
            }
        } // end class

    And my method using EmProvider:

        public List<Object[]> foo() {
            EntityManager em = null;
            List<Object[]> out = null;
            try {
                em = EmProvider.getInstance().getEntityManagerFactory().createEntityManager();
                Query query = em.createNativeQuery(JPQL_JOIN); // just some random query
                out = query.getResultList();
            } catch (Exception e) {
                // handle error....
            } finally {
                if (em != null) {
                    em.close(); // make sure to close the EntityManager
                }
            }
            return out;
        }

    I made sure to close the EntityManager (em) at the method level, as
    suggested. But when should the EntityManagerFactory itself be closed? And
    why does the EMF have to be a singleton? I have read about concurrency
    issues, but since I am not an experienced multithreaded programmer, I am
    still not really clear on this idea.
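
    One common answer, sketched below as an assumption rather than taken from
    the question, is to close the factory exactly once at application shutdown,
    e.g. from a ServletContextListener in a web app. The listener class name is
    hypothetical; only EmProvider comes from the code above.

        import javax.servlet.ServletContextEvent;
        import javax.servlet.ServletContextListener;

        // Hypothetical shutdown hook for the EMF singleton: the factory is
        // created lazily on first use and closed once when the app is torn down.
        public class EmfLifecycleListener implements ServletContextListener {
            @Override
            public void contextInitialized(ServletContextEvent sce) {
                // nothing to do here; EmProvider creates the EMF on demand
            }

            @Override
            public void contextDestroyed(ServletContextEvent sce) {
                EmProvider.getInstance().closeEmf(); // releases pooled connections
            }
        }

    Registered in web.xml (or via @WebListener on Servlet 3.0), this keeps the
    EMF open for the whole application lifetime, which is also the usual reason
    it is shared as a singleton: creating a factory is expensive, while
    EntityManagers are cheap, per-unit-of-work objects.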

    Read the article

  • FluentNHibernate: Not.Nullable() doesn't affect output schema

    - by alex
    Hello. I'm using Fluent NHibernate v1.0.0.595. There is a class:

        public class Weight {
            public virtual int Id { get; set; }
            public virtual double Value { get; set; }
        }

    I want to map it onto the following table:

        create table [Weight] (
            WeightId INT IDENTITY NOT NULL,
            Weight DOUBLE not null,
            primary key (WeightId)
        )

    Here is the map:

        public class WeightMap : ClassMap<Weight> {
            public WeightMap() {
                Table("[Weight]");
                Id(x => x.Id, "WeightId");
                Map(x => x.Value, "Weight").Not.Nullable();
            }
        }

    The problem is that this mapping produces a table with a nullable Weight
    column: Weight DOUBLE null. A not-nullable column is generated only with
    the default convention for the column name (i.e. Map(x => x.Value).Not.Nullable()
    instead of Map(x => x.Value, "Weight").Not.Nullable()), but in that case
    the column is named Value instead of Weight:

        create table [Weight] (
            WeightId INT IDENTITY NOT NULL,
            Value DOUBLE not null,
            primary key (WeightId)
        )

    I found a similar problem here:
    http://code.google.com/p/fluent-nhibernate/issues/detail?id=121, but it
    seems the workaround mentioned there, SetAttributeOnColumnElement("not-null", "true"),
    is outdated. Has anybody encountered this problem? Is there a way to
    specify a named column as not-nullable?

    Read the article

  • Converting rows to Columns in SQL

    - by Ram
    I have a table (actually a view, but I simplified my example to a table)
    which gives me data like this:

        id  CompanyName  website
        1   Google       google.com
        2   Google       google.net
        3   Google       google.org
        4   Google       google.in
        5   Google       google.de
        6   Microsoft    Microsoft.com
        7   Microsoft    live.com
        8   Microsoft    bing.com
        9   Microsoft    hotmail.com

    I am looking to convert it to get a result like this:

        CompanyName  website1       website2    website3    website4     website5   website6
        -----------  -------------  ----------  ----------  -----------  ---------  --------
        Google       google.com     google.net  google.org  google.in    google.de  NULL
        Microsoft    Microsoft.com  live.com    bing.com    hotmail.com  NULL       NULL

    I have looked into PIVOT, but it seems the row values cannot be dynamic
    (i.e. they can only be certain predefined values). Also, if there are more
    than 6 websites, I want to limit the output to the first 6. A dynamic pivot
    makes sense, but would I have to incorporate it into my view? Is there a
    simpler solution for this? Here are the SQL scripts:

        CREATE TABLE [dbo].[Company](
            [id] [int] NULL,
            [CompanyName] [varchar](50) NULL,
            [website] [varchar](50) NULL
        ) ON [PRIMARY]
        GO

        insert into company values (1,'Google','google.com')
        insert into company values (2,'Google','google.net')
        insert into company values (3,'Google','google.org')
        insert into company values (4,'Google','google.in')
        insert into company values (5,'Google','google.de')
        insert into company values (6,'Microsoft','Microsoft.com')
        insert into company values (7,'Microsoft','live.com')
        insert into company values (8,'Microsoft','bing.com')
        insert into company values (9,'Microsoft','hotmail.com')

    Read the article

  • Play playlist with MediaPlayer

    - by Kaloer
    Hi, I'm trying to play a playlist I get using the MediaStore provider.
    However, when I try playing a playlist, nothing happens. Can a MediaPlayer
    play a playlist (an .m3u file), and do I need to set the first track to
    play? This is my test code in the onCreate() method:

        Uri uri = MediaStore.Audio.Playlists.EXTERNAL_CONTENT_URI;
        if (uri == null) {
            Log.e("PlaylistTest", "uri is null"); // tag added here; Log.e needs (tag, msg)
        }
        String[] projection = new String[] {
            MediaStore.Audio.Playlists._ID,
            MediaStore.Audio.Playlists.NAME,
            MediaStore.Audio.Playlists.DATA
        };
        Cursor c = managedQuery(uri, projection, null, null, null);
        if (c == null) {
            Toast.makeText(getApplicationContext(), R.string.alarm_tone_picker_error,
                    Toast.LENGTH_LONG).show();
            return;
        }
        if (!c.moveToFirst()) {
            c.close();
            Toast.makeText(getApplicationContext(), R.string.alarm_tone_picker_no_music,
                    Toast.LENGTH_LONG).show();
            return;
        }
        c.moveToFirst();
        try {
            MediaPlayer player = new MediaPlayer();
            player.setDataSource(c.getString(2));
            player.start();
        } catch (Exception e) {
            e.printStackTrace();
        }

    I have turned on every volume stream. Thank you, Kaloer
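
    A sketch of one likely fix, under two assumptions not stated in the post:
    MediaPlayer generally does not resolve .m3u playlists itself, and prepare()
    must be called before start(). It resolves the playlist's member tracks
    through MediaStore and plays the first one; the variable c is the cursor
    from the question.

        // Resolve the playlist's tracks and play the first one.
        long playlistId = c.getLong(0); // _ID column from the projection above
        Uri members = MediaStore.Audio.Playlists.Members.getContentUri("external", playlistId);
        Cursor tracks = managedQuery(members,
                new String[] { MediaStore.Audio.Playlists.Members.DATA },
                null, null,
                MediaStore.Audio.Playlists.Members.PLAY_ORDER);
        if (tracks != null && tracks.moveToFirst()) {
            try {
                MediaPlayer player = new MediaPlayer();
                player.setDataSource(tracks.getString(0)); // file path of the first track
                player.prepare(); // synchronous prepare before start()
                player.start();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }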

    Read the article

  • Async task ASP.net HttpContext.Current.Items is empty - how do I handle this?

    - by GuruC
    We are running a very large web application in ASP.NET MVC on .NET 4.0.
    Recently we had an audit done, and the performance team reported a lot of
    null reference exceptions. So I started investigating from the dumps and
    the event viewer. My understanding is as follows: we are using async tasks
    in our controllers, and we rely on the HttpContext.Current.Items hashtable
    to store a lot of application-level values.

        Task<Articles>.Factory.StartNew(() =>
        {
            System.Web.HttpContext.Current = ControllerContext.HttpContext.ApplicationInstance.Context;
            var service = new ArticlesService(page);
            return service.GetArticles();
        }).ContinueWith(t => SetResult(t, "articles"));

    So we are copying the context object onto the new thread spawned by the
    task factory. This Context.Items is then used in the thread wherever
    necessary. For example:

        public class SomeClass
        {
            internal static int StreamID
            {
                get
                {
                    if (HttpContext.Current != null)
                    {
                        return (int)HttpContext.Current.Items["StreamID"];
                    }
                    else
                    {
                        return DEFAULT_STREAM_ID;
                    }
                }
            }
        }

    This runs fine as long as the number of parallel requests is moderate. My
    questions are as follows:

    1. When the load is higher and there are too many parallel requests, I
       notice that HttpContext.Current.Items is empty. I cannot figure out a
       reason for this, and it causes all the null reference exceptions.
    2. How do we make sure it is not empty? Is there any workaround?

    NOTE: I read through Stack Overflow, and people ask questions like
    "HttpContext.Current is null" - but in my case it is not null, it is
    empty. I was also reading an article where the author says that sometimes
    the request object is terminated, which may cause problems since Dispose
    has already been called on its objects. My copy of the context object is
    just a shallow copy, not a deep copy.

    Read the article

  • ROW_NUMBER() over a text column sort

    - by Marty Trenouth
    I'm having problems with dynamic sorting using ROW_NUMBER() in SQL Server.
    I have it working, but it throws errors on non-numeric fields. What do I
    need to change to get sorting on alpha columns working? Sample data:

        ID  Description
        5   Test
        6   Desert
        3   A evil

    I've got a SQL procedure:

        CREATE PROCEDURE [CRUDS].[MyTable_Search]
            -- Add the parameters for the stored procedure here
            -- Full Parameter List
            @ID int = NULL,
            @Description nvarchar(256) = NULL,
            @StartIndex int = 0,
            @Count int = null,
            @Order varchar(128) = 'ID asc'
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            -- Insert statements for procedure here
            Select * from
            (
                Select ROW_NUMBER() OVER
                (
                    Order By
                        case when @Order = 'ID asc' then [TableName].ID
                             when @Order = 'Description asc' then [TableName].Description
                        end asc,
                        case when @Order = 'ID desc' then [TableName].ID
                             when @Order = 'Description desc' then [TableName].Description
                        end desc
                ) as row,
                [TableName].*
                from [TableName]
                where (@ID IS NULL OR [TableName].ID = @ID)
                  AND (@Description IS NULL OR [TableName].Description = @Description)
            ) as a
            where row > @StartIndex
              and (@Count is null or row <= @StartIndex + @Count)
            order by
                case when @Order = 'ID asc' then a.ID
                     when @Order = 'Description asc' then a.Description
                end asc,
                case when @Order = 'ID desc' then a.ID
                     when @Order = 'Description desc' then a.Description
                end desc
        END

    Read the article

  • Grouping Categorized Data In WPF.

    - by VoidDweller
    Here is what I am trying to do.

    Dynamic Category:
      - Columns can be 0 or more.
      - Must contain 1 or more Type columns.
      - Will only be displayed if any row contains Type column data associated
        with it.

    Data Rows:
      - Will be added asynchronously.
      - Will be grouped by a Common Category column.
      - Will add a Dynamic Category if it does not yet exist.
      - Will add a Type column if it does not yet exist within its appropriate
        Dynamic Category.

    Platform info: WPF, .NET 3.5 SP1, C#, MVVM.

    I have a few partially functional prototypes, but each has its own major
    set of problems. Can any of you give me some guidance on this? Envision
    this nicely styled. :-)

        ------------------------------------------------------------------------
        |[ Common Category    ]|[ Dynamic Category 0 ]|[ Dynamic Category N ]|
        ------------------------------------------------------------------------
        |[Header 1]|[Header 2] |[ Type 0 ]|[ Type N ] |[ Type 0 ]|[ Type N ] |
        ------------------------------------------------------------------------
        |[Data 2 Group]                                                        |
        ------------------------------------------------------------------------
        | Data A   | Data 2   || Null     | Data 1   || Data 0   | Data 1   ||
        | Data B   | Data 2   || Data 0   | Null     || Data 0   | Data 1   ||
        ------------------------------------------------------------------------
        |[Data 1 Group]                                                        |
        ------------------------------------------------------------------------
        | Data C   | Data 1   || Null     | Data 1   || Data 0   | Data 1   ||
        | Data D   | Data 1   || Null     | Null     || Data 0   | Null     ||
        ------------------------------------------------------------------------

    Edit: sorting and paging are not necessary. I have looked at nested
    ListViews and DataGrids, and at dynamically building a Grid. Dynamically
    building a Grid and leveraging the SharedSizeGroup property seems the most
    promising strategy, but I am concerned about performance. Would a better
    approach be to treat this as a dynamic report? If so, what should I be
    looking at? Thanks for your help.

    Read the article

  • g-wan - reproducing the performance claims

    - by user2603628
    Using gwan_linux64-bit.tar.bz2 under Ubuntu 12.04 LTS, unpacking and
    running gwan, then pointing wrk at it (using a null file, null.html):

        wrk --timeout 10 -t 2 -c 100 -d20s http://127.0.0.1:8080/null.html
        Running 20s test @ http://127.0.0.1:8080/null.html
          2 threads and 100 connections
          Thread Stats   Avg      Stdev     Max   +/- Stdev
            Latency     11.65s     5.10s   13.89s   83.91%
            Req/Sec      3.33k     3.65k   12.33k   75.19%
          125067 requests in 20.01s, 32.08MB read
          Socket errors: connect 0, read 37, write 0, timeout 49
        Requests/sec:   6251.46
        Transfer/sec:      1.60MB

    ...very poor performance; in fact, there seems to be some kind of huge
    latency issue. During the test gwan is 200% busy and wrk is 67% busy.

    Pointing at nginx, wrk is 200% busy and nginx is 45% busy:

        wrk --timeout 10 -t 2 -c 100 -d20s http://127.0.0.1/null.html
          Thread Stats   Avg      Stdev     Max   +/- Stdev
            Latency    371.81us  134.05us  24.04ms   91.26%
            Req/Sec     72.75k     7.38k  109.22k   68.21%
          2740883 requests in 20.00s, 540.95MB read
        Requests/sec: 137046.70
        Transfer/sec:     27.05MB

    Pointing weighttp at nginx gives even faster results:

        /usr/local/bin/weighttp -k -n 2000000 -c 500 -t 3 http://127.0.0.1/null.html
        weighttp - a lightweight and simple webserver benchmarking tool

        starting benchmark...
        spawning thread #1: 167 concurrent requests, 666667 total requests
        spawning thread #2: 167 concurrent requests, 666667 total requests
        spawning thread #3: 166 concurrent requests, 666666 total requests
        progress:  9% done
        progress: 19% done
        progress: 29% done
        progress: 39% done
        progress: 49% done
        progress: 59% done
        progress: 69% done
        progress: 79% done
        progress: 89% done
        progress: 99% done

        finished in 7 sec, 13 millisec and 293 microsec, 285172 req/s, 57633 kbyte/s
        requests: 2000000 total, 2000000 started, 2000000 done, 2000000 succeeded, 0 failed, 0 errored
        status codes: 2000000 2xx, 0 3xx, 0 4xx, 0 5xx
        traffic: 413901205 bytes total, 413901205 bytes http, 0 bytes data

    The server is a virtual 8-core dedicated server (bare metal) under KVM.
    Where do I start looking to identify the problem gwan is having on this
    platform? I have tested lighttpd, nginx and node.js on the same OS, and
    the results are all as one would expect. The server has been tuned in the
    usual way: expanded ephemeral ports, increased ulimits, adjusted TIME_WAIT
    recycling, etc.

    Read the article

  • Unreleased Resource: Streams

    - by Vibhas
    Hi friends, I am getting a warning in the Fortify report for the following
    code:

        if (null != serverSocket) {
            OutputStream socketOutPutStream = serverSocket.getOutputStream();
            if (null != socketOutPutStream) {
                oos = new ObjectOutputStream(socketOutPutStream);
                if (null != oos) {
                    int c;
                    log.info("i am in Step 3 ooss " + oos);
                    while ((c = mergedIS.read()) != -1) {
                        oos.writeByte(c);
                    }
                }
                log.info("i am in Step 4 ");
            }
        }

    In the catch blocks I have:

        catch (UnknownHostException e) {
            // catch exception Vibhas added
            log.info("UnknownHostException occured");
        } catch (IOException e) {
            // catch exception Vibhas added
            log.info("IOException occured");
        } catch (Exception e) {
            // catch exception
            //log.info("error occured in copyFile in utils-->" + e.getMessage()
            //        + "file name is-->" + destiFileName);
        } finally {
            try {
                if (null != oos) {
                    oos.flush();
                    oos.close();
                }
            } catch (Exception e) {
                // catch exception
            }
        }

    The warning I am getting in the Fortify report is:

        Abstract: The function copyFile() in ODCUtil.java sometimes fails to
        release a system resource allocated by getOutputStream() on line 61.

        Sink: ODCUtil.java:64 oos = new ObjectOutputStream(socketOutPutStream)()
        62      if (null != socketOutPutStream) {
        63
        64          oos = new ObjectOutputStream(socketOutPutStream);
        65          if (null != oos) {
        66              int c;
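
    Fortify's complaint is that socketOutPutStream itself is never closed: if
    the ObjectOutputStream constructor throws, the stream returned by
    getOutputStream() leaks. A minimal sketch of one way to satisfy it, using
    a hypothetical helper name and the same kinds of arguments as the
    question's code, is to close the underlying stream in the finally block:

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.ObjectOutputStream;
        import java.io.OutputStream;
        import java.net.Socket;

        public final class StreamCopy {
            // Hypothetical helper: copies 'in' to the socket and guarantees the
            // stream from getOutputStream() is released on every path, including
            // when the ObjectOutputStream constructor itself throws.
            static void copyToSocket(Socket socket, InputStream in) throws IOException {
                OutputStream socketOut = socket.getOutputStream();
                try {
                    ObjectOutputStream oos = new ObjectOutputStream(socketOut);
                    int c;
                    while ((c = in.read()) != -1) {
                        oos.writeByte(c);
                    }
                    oos.flush(); // push buffered bytes before the stream is closed
                } finally {
                    socketOut.close(); // releases the socket resource Fortify tracks
                }
            }
        }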

    Read the article

  • efficient thread-safe singleton in C++

    - by user168715
    The usual pattern for a singleton class is something like

        static Foo &getInst()
        {
            static Foo *inst = NULL;
            if (inst == NULL)
                inst = new Foo(...);
            return *inst;
        }

    However, it's my understanding that this solution is not thread-safe,
    since 1) Foo's constructor might be called more than once (which may or
    may not matter) and 2) inst may not be fully constructed before it is
    returned to a different thread.

    One solution is to wrap a mutex around the whole method, but then I'm
    paying for synchronization overhead long after I actually need it. An
    alternative is something like

        static Foo &getInst()
        {
            static Foo *inst = NULL;
            if (inst == NULL)
            {
                pthread_mutex_lock(&mutex);
                if (inst == NULL)
                    inst = new Foo(...);
                pthread_mutex_unlock(&mutex);
            }
            return *inst;
        }

    Is this the right way to do it, or are there any pitfalls I should be
    aware of? For instance, are there any static initialization order problems
    that might occur, i.e. is inst always guaranteed to be NULL the first time
    getInst is called?

    Read the article

  • Append data to same text file

    - by Manu
        SimpleDateFormat formatter = new SimpleDateFormat("ddMMyyyy_HHmmSS");
        String strCurrDate = formatter.format(new java.util.Date());
        String strfileNm = "Customer_" + strCurrDate + ".txt";
        String strFileGenLoc = strFileLocation + "/" + strfileNm;
        String Query1 = "select '0'||to_char(sysdate,'YYYYMMDD')||'123456789' class_code from dual";
        String Query2 = "select '0'||to_char(sysdate,'YYYYMMDD')||'123456789' class_code from dual";
        try {
            Statement stmt = null;
            ResultSet rs = null;
            Statement stmt1 = null;
            ResultSet rs1 = null;
            stmt = conn.createStatement();
            stmt1 = conn.createStatement();
            rs = stmt.executeQuery(Query1);
            rs1 = stmt1.executeQuery(Query2);
            File f = new File(strFileGenLoc);
            OutputStream os = (OutputStream) new FileOutputStream(f, true);
            String encoding = "UTF8";
            OutputStreamWriter osw = new OutputStreamWriter(os, encoding);
            BufferedWriter bw = new BufferedWriter(osw);
            while (rs.next()) {
                bw.write(rs.getString(1) == null ? "" : rs.getString(1));
                bw.write(" ");
            }
            bw.flush();
            bw.close();
        } catch (Exception e) {
            System.out.println("Exception occured while getting resultset by the query");
            e.printStackTrace();
        } finally {
            try {
                if (conn != null) {
                    System.out.println("Closing the connection" + conn);
                    conn.close();
                }
            } catch (SQLException e) {
                System.out.println("Exception occured while closing the connection");
                e.printStackTrace();
            }
        }
        return objArrayListValue;
    }

    The above code works fine: it writes the content of the "rs" result set to
    a text file. Now what I want is to append the content of the "rs1" result
    set to the same text file (i.e. I need to append the "rs1" content after
    the "rs" content in the same file).
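
    A minimal sketch of one way to do it, an assumption rather than something
    from the post: reuse the same BufferedWriter for both result sets before
    closing it. rs, rs1 and f are the variables from the code above.

        // Write both result sets through one writer; the second loop simply
        // appends after the first because the stream is still open.
        BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(
                new FileOutputStream(f, true), "UTF8"));
        try {
            while (rs.next()) {
                bw.write(rs.getString(1) == null ? "" : rs.getString(1));
                bw.write(" ");
            }
            while (rs1.next()) { // same file, rows land right after the rs rows
                bw.write(rs1.getString(1) == null ? "" : rs1.getString(1));
                bw.write(" ");
            }
            bw.flush();
        } finally {
            bw.close();
        }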

    Read the article

  • Time gaps between host clEnqueue_xxx calls

    - by dialer
    Consider these OpenCL calls (3 memcpy DtoH, 4313 cl_float elements each):

        clEnqueueReadBuffer(CommandQueue, SpectrumAbsMem, CL_FALSE, 0,
                            SpectrumMemSize, SpectrumAbs, 0, NULL, NULL);
        clEnqueueReadBuffer(CommandQueue, SpectrumReMem, CL_FALSE, 0,
                            SpectrumMemSize, SpectrumRe, 0, NULL, NULL);
        clEnqueueReadBuffer(CommandQueue, SpectrumImMem, CL_FALSE, 0,
                            SpectrumMemSize, SpectrumIm, 0, NULL, NULL);

    When I analyze these with the NVIDIA Visual Profiler, I see that the
    actual memcpy operation only takes 8 us, but there is a significant gap of
    around 130 us after each memcpy. I'm already using the supposedly
    asynchronous method (the CL_FALSE in the argument list). When I use only
    one operation, but with three times the size, the operation is much
    faster. Why is the time gap between the actual memcpy operations so huge,
    whereas the gap between the kernel execution (immediately before these
    three operations) and the first memcpy is only 7 us? Can I get rid of it,
    or do I need to accumulate more data before starting a memcpy? If so, is
    there a convenient way to combine multiple arrays into a single contiguous
    block of memory, but still have a cl_mem object as a separate device
    memory pointer to each section?

    Read the article

  • nservicebus deleting subscription record after inserting it?

    - by Justin Holbrook
    I have been playing with NServiceBus for a few weeks now, and since
    everything was going well on my local machine, I decided to set up a test
    environment and work on deployment. I am using the generic host that comes
    with NServiceBus; I was running the NServiceBus.Integration profile
    locally, but would like to use NServiceBus.Production in the test
    environment. I set up a SQL Server 2008 database, made changes to my
    app.config, and everything seemed to work fine. But after a few attempts I
    noticed messages were not being picked up by my subscriber. I checked the
    subscription table and it was empty. Upon examination of the logs I
    noticed the following:

        2010-05-06 15:07:57,416 [1] DEBUG NHibernate.Persister.Entity.AbstractEntityPersister [(null)] <(null)> - Insert 0: INSERT INTO [Subscription] (SubscriberEndpoint, MessageType) VALUES (?, ?)
        2010-05-06 15:07:57,416 [1] DEBUG NHibernate.Persister.Entity.AbstractEntityPersister [(null)] <(null)> - Update 0:
        2010-05-06 15:07:57,416 [1] DEBUG NHibernate.Persister.Entity.AbstractEntityPersister [(null)] <(null)> - Delete 0: DELETE FROM [Subscription] WHERE SubscriberEndpoint = ? AND MessageType = ?

    Why would it insert and then delete my subscription right afterwards? To
    rule out an NHibernate dialect issue, I tried switching my subscription
    storage to an Oracle 10g database. It behaved exactly the same: it worked
    the first two times, then I started seeing my subscriptions being deleted
    right after they were inserted. Any ideas what is causing this behavior?

    Read the article

  • Small OpenMP program freezes sometimes (gcc, c, linux)

    - by osgx
    Hello. I just wrote a small OpenMP test, and it does not work correctly
    all the time:

        #include <omp.h>

        int main()
        {
            int i, j = 0;
            #pragma omp parallel
            for (i = 0; i < 1000; i++)
            {
                #pragma omp barrier
                j += j ^ i;
            }
            return j;
        }

    The use of j for writing from all threads is incorrect in this example,
    BUT it should only produce a nondeterministic value of j; instead, I get a
    freeze. Compiled with:

        gcc-4.3.1 -fopenmp a.c -o gcc -static

    Run on a 4-core x86 Core2 Linux server:

        $ ./gcc

    ...and got a freeze (sometimes; about 1 freeze per 4-5 fast runs). strace:

        [pid 13118] <... futex resumed> ) = 0
        [pid 13118] futex(0x80d3014, FUTEX_WAIT, 2, NULL <unfinished ...>
        [pid 13120] <... futex resumed> ) = 0
        [pid 13119] futex(0x80d3014, FUTEX_WAIT, 2, NULL <unfinished ...>
        [pid 13120] futex(0x80d3014, FUTEX_WAKE, 1) = 1
        [pid 13120] futex(0x80cd798, FUTEX_WAIT, 1, NULL <unfinished ...>
        [pid 13109] <... futex resumed> ) = 0
        [pid 13109] futex(0x80d3014, FUTEX_WAKE, 1) = 1
        [pid 13109] futex(0x80d3020, FUTEX_WAIT, 251, NULL <unfinished ...>
        [pid 13118] <... futex resumed> ) = 0
        [pid 13118] futex(0x80d3014, FUTEX_WAKE, 1) = 1
        [pid 13119] <... futex resumed> ) = 0
        [pid 13118] futex(0x80d3020, FUTEX_WAIT, 251, NULL <unfinished ...>
        [pid 13119] futex(0x80d3014, FUTEX_WAKE, 1) = 0
        [pid 13119] futex(0x80d3020, FUTEX_WAIT, 251, NULL <freeze>

    Why do I have a freeze (deadlock)?

    Read the article

  • Recursive breadth-first traversal function in Java or C++?

    - by joejax
    Here is Java code for breadth-first traversal:

        void breadthFirstNonRecursive() {
            Queue<Node> queue = new java.util.LinkedList<Node>();
            queue.offer(root);
            while (!queue.isEmpty()) {
                Node node = queue.poll();
                visit(node);
                if (node.left != null)
                    queue.offer(node.left);
                if (node.right != null)
                    queue.offer(node.right);
            }
        }

    Is it possible to write a recursive function to do the same? At first I
    thought this would be easy, so I came up with this:

        void breadthFirstRecursive() {
            Queue<Node> q = new LinkedList<Node>();
            breadthFirst(root, q);
        }

        void breadthFirst(Node node, Queue<Node> q) {
            if (node == null)
                return;
            q.offer(node);
            Node n = q.poll();
            visit(n);
            if (n.left != null)
                breadthFirst(n.left, q);
            if (n.right != null)
                breadthFirst(n.right, q);
        }

    Then I found it doesn't work. It actually does the same thing as this:

        void preOrder(Node node) {
            if (node == null)
                return;
            visit(node);
            preOrder(node.left);
            preOrder(node.right);
        }

    Has anyone thought about this before?
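
    One recursive formulation that does preserve breadth-first order - a
    sketch, not from the original post - moves the recursion onto the queue
    itself: each call dequeues one node, enqueues its children, and recurses
    on whatever is left. Node, visit() and root are the same as above.

        void breadthFirstRecursive(Node root) {
            Queue<Node> q = new LinkedList<Node>();
            if (root != null)
                q.offer(root);
            breadthFirst(q);
        }

        void breadthFirst(Queue<Node> q) {
            if (q.isEmpty())
                return;               // base case: frontier exhausted
            Node n = q.poll();
            visit(n);
            if (n.left != null)
                q.offer(n.left);
            if (n.right != null)
                q.offer(n.right);
            breadthFirst(q);          // tail call on the rest of the queue
        }

    Because the queue, not the call stack, decides the visiting order, this
    visits nodes level by level exactly like the iterative version.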

    Read the article

  • Contacts & Autocomplete

    - by Vince
    First post. I'm new to Android and programming in general. What I'm
    attempting to do is have an autocomplete text box pop up with completed
    names from the contact list. I.e., if they type in "john" it will say
    "John Smith", or any John in their contacts. The code is basic; I pulled
    it from a few tutorials.

        private void autoCompleteBox() {
            ContentResolver cr = getContentResolver();
            Uri contacts = Uri.parse("content://contacts/people");
            Cursor managedCursor1 = cr.query(contacts, null, null, null, null);
            if (managedCursor1.moveToFirst()) {
                String contactname;
                String cphoneNumber;
                int nameColumn = managedCursor1.getColumnIndex("name");
                int phoneColumn = managedCursor1.getColumnIndex("number");
                Log.d("int Name", Integer.toString(nameColumn));
                Log.d("int Number", Integer.toString(phoneColumn));
                do {
                    // Get the field values
                    contactname = managedCursor1.getString(nameColumn);
                    cphoneNumber = managedCursor1.getString(phoneColumn);
                    if ((contactname != " " || contactname != null)
                            && (cphoneNumber != " " || cphoneNumber != null)) {
                        c_Name.add(contactname);
                        c_Number.add(cphoneNumber);
                        Toast.makeText(this, contactname, Toast.LENGTH_SHORT).show();
                    }
                } while (managedCursor1.moveToNext());
            }
            name_Val = (String[]) c_Name.toArray(new String[c_Name.size()]);
            phone_Val = (String[]) c_Number.toArray(new String[c_Name.size()]);
            ArrayAdapter<String> adapter = new ArrayAdapter<String>(this,
                    android.R.layout.simple_dropdown_item_1line, name_Val);
            personName.setAdapter(adapter);
        }

    personName is my AutoCompleteTextView. It actually works when I use it in
    the emulator (4.2) with manually entered contacts through the People app,
    but when I use it on my device it will not pop up with any names. I'm sure
    it's something ridiculous, but I've tried to find the answer and I'm
    getting nowhere. Can't learn if I don't ask.
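
    One likely culprit - an assumption, not something stated in the post - is
    the legacy content://contacts/people provider, which is deprecated and
    often returns nothing on real devices running newer Android versions. A
    sketch of the same query against ContactsContract (needs the READ_CONTACTS
    permission); c_Name and c_Number are the lists from the code above:

        // Query the supported contacts provider instead of the legacy URI.
        Cursor c = getContentResolver().query(
                ContactsContract.CommonDataKinds.Phone.CONTENT_URI,
                new String[] {
                        ContactsContract.CommonDataKinds.Phone.DISPLAY_NAME,
                        ContactsContract.CommonDataKinds.Phone.NUMBER
                },
                null, null, null);
        if (c != null) {
            while (c.moveToNext()) {
                c_Name.add(c.getString(0));   // contact display name
                c_Number.add(c.getString(1)); // phone number
            }
            c.close();
        }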

    Read the article

  • Performance of stored proc when updating columns selectively based on parameters?

    - by kprobst
    I'm trying to figure out whether this is relatively well-performing T-SQL
    (this is SQL Server 2008). I need to create a stored procedure that
    updates a table. The proc accepts as many parameters as there are columns
    in the table, and with the exception of the PK column they all default to
    NULL. The body of the procedure looks like this:

        CREATE PROCEDURE proc_repo_update
            @object_id bigint
            ,@object_name varchar(50) = NULL
            ,@object_type char(2) = NULL
            ,@object_weight int = NULL
            ,@owner_id int = NULL
            -- ...etc
        AS
        BEGIN
            update object_repo
            set object_name = ISNULL(@object_name, object_name)
                ,object_type = ISNULL(@object_type, object_type)
                ,object_weight = ISNULL(@object_weight, object_weight)
                ,owner_id = ISNULL(@owner_id, owner_id)
                -- ...etc
            where object_id = @object_id
            return @@ROWCOUNT
        END

    So basically: update a column only if its corresponding parameter was
    provided, and leave the rest alone. This works well enough, but since the
    ISNULL call returns the value of the column itself when the received
    parameter is NULL, will SQL Server optimize this somehow? This might be a
    performance bottleneck in the application, where the table will be updated
    heavily (insertion will be uncommon, so performance there is not a
    problem). So I'm trying to figure out the best way to do this. Is there a
    way to condition the column expressions with something like CASE WHEN? The
    table will be indexed up the wazoo as well for read performance. Is this
    the best approach? My alternative at this point is to build the UPDATE
    statement in code (e.g. inline SQL) and execute it against the server.
    That would resolve my doubts about performance, but I'd rather keep this
    in a stored proc if possible.

    Read the article

  • I get a "TypeError: Error #1009: Cannot access a property or method of a null object reference." error in my AIR project when using a button.

    - by Xcore
    So my problem is this: I'm working on my Adobe AIR project, and I decided
    to code some buttons for navigation. The problem is that I get an error
    for trying to do so. Here is my code:

        import flash.events.MouseEvent;

        this.stop();

        play_btn.addEventListener(MouseEvent.MOUSE_DOWN, playButtonClick);

        function playButtonClick(evt:MouseEvent) {
            gotoAndPlay(337);
        }

    I do not see what is wrong, actually. I tried this in a blank non-AIR file
    and it worked well. Thanks for helping!

    Read the article

  • Has anyone ever successfully make index merge work for MySQL?

    - by user198729
    Setup:

        mysql> create table t(a integer unsigned, b integer unsigned);
        mysql> insert into t(a,b) values (1,2),(1,3),(2,4);
        mysql> create index i_t_a on t(a);
        mysql> create index i_t_b on t(b);
        mysql> explain select * from t where a=1 or b=4;
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra       |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        |  1 | SIMPLE      | t     | ALL  | i_t_a,i_t_b   | NULL | NULL    | NULL |    3 | Using where |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+

    Is there something I'm missing?

    Update:

        mysql> explain select * from t where a=1 or b=4;
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra       |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        |  1 | SIMPLE      | t     | ALL  | i_t_a,i_t_b   | NULL | NULL    | NULL | 1863 | Using where |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+

    Version:

        mysql> select version();
        +----------------------+
        | version()            |
        +----------------------+
        | 5.1.36-community-log |
        +----------------------+

    Has anyone ever successfully made index merge work for MySQL? I'll be glad
    to see success stories here :)

    Read the article

  • char array split ip with strtok

    - by user1480139
    I'm trying to split an IP address like 127.0.0.1, read from a file, using
    the following C code:

        pch2 = strtok (ip,".");
        printf("\npart 1 ip: %s",pch2);
        pch2 = strtok (NULL,".");
        printf("\npart 2 ip: %s",pch2);

    And ip is a char ip[500] that contains an IP address. When printing, it
    prints 127 as part 1, but as part 2 it prints NULL. Can someone help me?

    EDIT: the whole function:

        FILE *file = fopen ("host.txt", "r");
        char * pch;
        char * pch2;
        char ip[BUFFSIZE];
        IPPart result;
        if (file != NULL)
        {
            char line [BUFFSIZE];
            while (fgets(line, sizeof line, file) != NULL)
            {
                if (line[0] != '#')
                {
                    //fputs(line,stdout);
                    pch = strtok (line," ");
                    printf ("%s\n",pch);
                    strncpy(ip, pch, sizeof(pch)-1);
                    ip[sizeof(pch)-1] = '\0';
                    //pch = strtok (line, " ");
                    pch = strtok (NULL," ");
                    printf("%s",pch);
                    pch2 = strtok (ip,".");
                    printf("\npart 1 ip: %s",pch2);
                    pch2 = strtok (NULL,".");
                    printf("\npart 2 ip: %s",pch2);
                    //if(strcmp(pch,url) == 0)
                    //{
                    //    result.part1 =
                    //}
                }
            }
            fclose(file);
        }

    Read the article

  • Question on SQL Grouping

    - by Lijo
    Hi team, I am trying to achieve the following without using a subquery.
    For a funding, I would like to select the latest Letter created date and
    the earliest 'WorkList created since Letter created' date for that
    funding.

        FundingId, Letter:
        (1, 1/1/2009) (1, 5/5/2009) (1, 8/8/2009) (2, 3/3/2009)

        FundingId, WorkList:
        (1, 5/5/2009) (1, 9/9/2009) (1, 10/10/2009) (2, 2/2/2009)

        Expected result - FundingId, Letter, WorkList:
        (1, 8/8/2009, 9/9/2009)

    I wrote a query as follows. It has a bug: it will omit those FundingIds
    for which the minimum WorkList date is less than the latest Letter date
    (even though they have another WorkList with a date greater than the
    Letter created date).

        CREATE TABLE #Funding(
            [Funding_ID] [int] IDENTITY(1,1) NOT NULL,
            [Funding_No] [int] NOT NULL,
            CONSTRAINT [PK_Center_Center_ID] PRIMARY KEY NONCLUSTERED ([Funding_ID] ASC)
        ) ON [PRIMARY]

        CREATE TABLE #Letter(
            [Letter_ID] [int] IDENTITY(1,1) NOT NULL,
            [Funding_ID] [int] NOT NULL,
            [CreatedDt] [SMALLDATETIME],
            CONSTRAINT [PK_Letter_Letter_ID] PRIMARY KEY NONCLUSTERED ([Letter_ID] ASC)
        ) ON [PRIMARY]

        CREATE TABLE #WorkList(
            [WorkList_ID] [int] IDENTITY(1,1) NOT NULL,
            [Funding_ID] [int] NOT NULL,
            [CreatedDt] [SMALLDATETIME],
            CONSTRAINT [PK_WorkList_WorkList_ID] PRIMARY KEY NONCLUSTERED ([WorkList_ID] ASC)
        ) ON [PRIMARY]

        SELECT F.Funding_ID, Funding_No, MAX(L.CreatedDt), MIN(W.CreatedDt)
        FROM #Funding F
        INNER JOIN #Letter L ON L.Funding_ID = F.Funding_ID
        LEFT OUTER JOIN #WorkList W ON W.Funding_ID = F.Funding_ID
        GROUP BY F.Funding_ID, Funding_No
        HAVING MIN(W.CreatedDt) > MAX(L.CreatedDt)

    How can I write a correct query without using a subquery? Please help.
    Thanks, Lijo

    Read the article

  • Linked List Inserting strings in alphabetical order

    - by user69514
    I have a linked list where each node contains a string and a count. My
    insert method needs to insert a new node in alphabetical order, based on
    the string. If there is already a node with the same string, then I
    increment the count. The problem is that my method is not inserting in
    alphabetical order.

        public Node findIsertionPoint(Node head, Node node) {
            if (head == null)
                return null;

            Node curr = head;
            while (curr != null) {
                if (curr.getValue().compareTo(node.getValue()) == 0)
                    return curr;
                else if (curr.getNext() == null
                        || curr.getNext().getValue().compareTo(node.getValue()) > 0)
                    return curr;
                else
                    curr = curr.getNext();
            }
            return null;
        }

        public void insert(Node node) {
            Node newNode = node;
            Node insertPoint = this.findIsertionPoint(this.head, node);
            if (insertPoint == null)
                this.head = newNode;
            else {
                if (insertPoint.getValue().compareTo(node.getValue()) == 0)
                    insertPoint.getItem().incrementCount();
                else {
                    newNode.setNext(insertPoint.getNext());
                    insertPoint.setNext(newNode);
                }
            }
            count++;
        }
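
    A likely cause - an assumption from reading the code, not something stated
    in the question - is that findIsertionPoint can only pick a node to insert
    after, so a string that sorts before the current head never ends up first.
    A sketch of an insert that handles the head case separately, reusing the
    question's findIsertionPoint helper:

        public void insert(Node node) {
            // New smallest value (or empty list): the node becomes the new head.
            if (head == null || head.getValue().compareTo(node.getValue()) > 0) {
                node.setNext(head);
                head = node;
            } else {
                Node insertPoint = findIsertionPoint(head, node);
                if (insertPoint.getValue().compareTo(node.getValue()) == 0) {
                    insertPoint.getItem().incrementCount(); // duplicate string: bump its count
                } else {
                    node.setNext(insertPoint.getNext());
                    insertPoint.setNext(node);
                }
            }
            count++; // mirrors the original, which counts every call
        }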

    Read the article

  • Encrypting a file in win API

    - by Kristian
    Hi, I have to write a Windows API program that encrypts a file by adding
    three to each character. So I wrote the following, but now it's not doing
    anything at all... where did I go wrong?

        #include "stdafx.h"
        #include <windows.h>

        int _tmain(int argc, _TCHAR* argv[])
        {
            HANDLE filein, fileout;
            filein = CreateFile(L"d:\\test.txt", GENERIC_READ, 0, NULL,
                                OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
            fileout = CreateFile(L"d:\\test.txt", GENERIC_WRITE, 0, NULL,
                                 CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);

            DWORD really; // later used to store how many bytes were actually read

            do
            {
                BYTE x[1024]; // the read buffer
                ReadFile(filein, x, 1024, &really, NULL);
                for (int i = 0; i < really; i++)
                {
                    x[i] = (x[i] + 3) % 256;
                }
                DWORD really2;
                WriteFile(fileout, x, really, &really2, NULL);
            } while (really == 1024);

            CloseHandle(filein);
            CloseHandle(fileout);
            return 0;
        }

    And if I'm right, how can I tell that it's OK?

    Read the article

  • MySql product\tag query optimisation - please help!

    - by Nige
    Hi there. I have an SQL query I am struggling to optimise. It is basically
    used to pull back products for a shopping cart. The products each have
    tags attached via a many-to-many product_tag table, and I also pull back a
    store name from a separate store table. I'm using GROUP_CONCAT to get a
    list of tags for display (this is why I have the strange group-by/order-by
    clauses at the bottom), and I need to order by dateadded, showing the
    latest scheduled product first. Here is the query:

        SELECT products.*, stores.name,
               GROUP_CONCAT(tags.taglabel ORDER BY tags.id ASC SEPARATOR " ") taglist
        FROM (products)
        JOIN product_tag ON products.id = product_tag.productid
        JOIN tags ON tags.id = product_tag.tagid
        JOIN stores ON products.cid = stores.siteid
        WHERE dateadded < '2010-05-28 07:55:41'
        GROUP BY products.id ASC
        ORDER BY products.dateadded DESC
        LIMIT 2

    Unfortunately, even with a small set of data (3 tags and about 12
    products) the query is taking 0.0034 seconds to run. Eventually I want to
    have about 2000 products and 50 tags in this system (I'm guessing this
    will be very slow). Here is the EXPLAIN output:

        id | select_type | table       | type   | possible_keys   | key     | key_len | ref                            | rows | Extra
        1  | SIMPLE      | tags        | ALL    | PRIMARY         | NULL    | NULL    | NULL                           | 4    | Using temporary; Using filesort
        1  | SIMPLE      | product_tag | ref    | tagid,productid | tagid   | 4       | cs_final.tags.id               | 2    |
        1  | SIMPLE      | products    | eq_ref | PRIMARY,cid     | PRIMARY | 4       | cs_final.product_tag.productid | 1    | Using where
        1  | SIMPLE      | stores      | ALL    | siteid          | NULL    | NULL    | NULL                           | 7    | Using where; Using join buffer

    Can anyone help?

    Read the article

  • Handling nulls in Datawarehouse

    - by rrydman
    I'd like to ask your input on best practices for handling null or empty
    data values in data warehousing and SSIS/SSAS. I have several fact and
    dimension tables that contain null values in different rows. Specifics:

    1) What is the best way to handle null date/time values? Should I make a
       'default' row in my time or date dimensions and point SSIS to the
       default row when a null is found?

    2) What is the best way to handle null/empty values inside dimension data?
       E.g.: I have some rows in an 'Accounts' dimension that have empty (not
       NULL) values in the Account Name column. Should I convert these empty
       or null values in the column to a specific default value?

    3) Similar to point 1 above: what should I do if I end up with a fact
       table row that has no record in one of the dimension columns? Do I need
       default dimension records for each dimension in case this happens?

    4) Any suggestions or tips on how to handle these operations in SQL Server
       Integration Services (SSIS)? The best data flow configurations or best
       transformation objects to use would be helpful.

    Thanks :-)

    Read the article
