Search Results

Search found 940 results on 38 pages for 'g st'.


  • Java Spotlight Episode 76: Pro JavaFX 2 - A Definitive Guide to Rich Clients with Java Technology

    - by Roger Brinkley
    An interview with the authors of Pro JavaFX 2: A Definitive Guide to Rich Clients with Java Technology. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.

    Show Notes

    News: Angela Caicedo has created 3 new JavaFX screencast videos on the Java YouTube channel: Part 1: Building your First JavaFX Application with NetBeans 7.1, Part 2: Building your First JavaFX Application with NetBeans 7.1, and Getting Started with Scene Builder.

    Events: March 26-29, EclipseCon, Reston, USA; March 27, Virtual Developer Days - Java (Asia Pacific (English)), 9:30am to 2:00pm IST / 12:00pm to 4:30pm SGT / 3:00pm to 7:30pm AEDT; April 4-5, JavaOne Japan, Tokyo, Japan; April 12, GreenJUG, Greenville, SC; April 17-18, JavaOne Russia, Moscow, Russia; April 18-20, Devoxx France, Paris, France; April 26, Mix-IT, Lyon, France; May 3-4, JavaOne India, Hyderabad, India.

    Feature Interview: Pro JavaFX 2: A Definitive Guide to Rich Clients with Java Technology is available from Amazon.com in either paperback or on the Kindle.

    James L. (Jim) Weaver is a Java and JavaFX developer, author, and speaker with a passion for helping rich-client Java and JavaFX become preferred technologies for new application development. Books that Jim has authored include Inside Java, Beginning J2EE, and Pro JavaFX Platform, with the latter being updated to cover JavaFX 2.0. His professional background includes 15 years as a systems architect at EDS, and the same number of years as an independent developer. Jim is an international speaker at software technology conferences, including the JavaOne conferences in San Francisco and São Paulo. Jim blogs at http://javafxpert.com and tweets @javafxpert.

    Weiqi Gao is a principal software engineer with Object Computing, Inc., in St. Louis, MO. He has more than 18 years of software development experience and has been using Java technology since 1998. He is interested in programming languages, object-oriented systems, distributed computing, and graphical user interfaces. He is a presenter and a member of the steering committee of the St. Louis Java Users Group. Weiqi holds a PhD in mathematics.

    Stephen Chin is chief agile methodologist at GXS and a technical expert in client UI technologies. He is lead author on the Pro Android Flash title and coauthored the Pro JavaFX Platform title, which is the leading technical reference for JavaFX. In addition, Stephen runs the very successful Silicon Valley JavaFX User Group, which has hundreds of members and tens of thousands of online viewers. Finally, he is a Java Champion, chair of the OSCON Java conference, and an internationally recognized speaker featured at Devoxx, Codemash, AnDevCon, Jazoon, and JavaOne, where he received a Rock Star Award. Stephen can be followed on twitter @steveonjava and reached via his blog: http://steveonjava.com.

    Dean Iverson has been writing software professionally for more than 15 years. He is employed by the Virginia Tech Transportation Institute, where he is a rich client application developer. He also has a small software consultancy called Pleasing Software Solutions, which he cofounded with his wife.

    Johan Vos started to work with Java in 1995. As part of the Blackdown team, he helped port Java to Linux. With LodgON, the company he cofounded, he has been mainly working on Java-based solutions for social networking software. Because he can't make a choice between embedded development and enterprise development, his main focus is on end-to-end Java, combining the strengths of backend systems and embedded devices. His favorite technologies are currently Java EE/Glassfish on the backend and JavaFX on the frontend. Johan's blog can be followed at http://blogs.lodgon.com/johan; he tweets at http://twitter.com/johanvos.

    Mail Bag

    What's Cool: Gerrit Grunwald's SteelSeries FX Experience Tools, Canned Animations, ComboBox.

    Read the article

  • Oracle OpenWorld Update: Oracle GoldenGate Customer Panels

    - by Doug Reid
    We are two weeks out from the start of Oracle OpenWorld 2012. The Data Integration team has a solid line-up of product and customer sessions for you to attend this year, plus five hands-on labs and numerous demonstration pods in Moscone South.

    On Monday we kick the track off with Brad Adelberg's Future Strategy, Direction and Roadmap for Oracle's Data Integration Platform at 10:45 AM in Moscone West 3005. Over the rest of the week we have a number of deep dive sessions that build out the themes that Brad discusses in his keynote, but the two that I would like to highlight today are our Oracle GoldenGate customer panels.

    The first customer panel is on Zero Downtime Operations and is on Monday at 1:45 in Moscone West 3005. The theme of this session is how to reduce downtime for critical must-succeed systems. Here's a rundown of the session: Bank of America, TALX, and St. Jude Medical all have user communities that expect systems to be available around the clock. In this customer panel session, Bank of America discusses how it will be leveraging Oracle GoldenGate. St. Jude Medical shares how it is using Oracle GoldenGate to achieve a zero-downtime migration for a 5 TB Oracle online transaction processing (OLTP) 24/7 mission-critical database. TALX discusses how Equifax Workforce Information Services used Oracle GoldenGate to move from processing online transactions in a single site to processing concurrently from two geographically disparate data centers, providing a highly available solution with significant burst capacity.

    On Tuesday at 11:45 in Moscone West 3005 we switch gears and host a customer panel on Operational Reporting. The theme of this customer panel is all around reporting and how Oracle GoldenGate raises the bar on reporting by enabling real-time access to real-time data. Here's a rundown of the session: Turk Telekom and Comcast are half a world away from each other, but these two powerhouse companies have both drastically improved performance and access to real-time data by using Oracle GoldenGate. During this panel discussion, Turk Telekom will explain its evaluation and implementation of Oracle GoldenGate, how the business has experienced significant improvements in the core database and reporting platform, and how it plans to expand its usage into its SOA architecture and its architecture based on Oracle's Siebel platform. Comcast will explain its implementation of Oracle GoldenGate and how it moves data in real time from its mission-critical HP NonStop database to a Teradata data warehouse.

    Join us at our sessions to learn what other customers are doing with our products, or stop by our demo pods in Moscone South and meet the product management and development teams.

    Read the article

  • First-Time GLSL Shadow Mapping Problems

    - by Locke
    I'm working on building out a 2.5D engine and having massive problems getting my shadows working. I'm at a point where I'm VERY close. So, let's see a picture to see what I have:

    As you can see above, the image has lighting -- but the shadow map is displaying incorrectly. The shadow map is shown in the bottom left hand side of the screen as a normal 2D texture, so we can see what it looks like at any given time. If you notice, it appears that the shadows are generated backwards, in the wrong direction -- I think. But the problem is a little deeper -- I'm just plotting the shadow onto the screen, which I know is wrong -- I'm ignoring the actual test to see if we NEED to show a shadow. The incoming parameters all appear to be correct -- so there has to be something wrong with my shader code somewhere. Here's what my code looks like:

    Vertex shader:

        uniform mat4 LightModelViewProjectionMatrix;

        varying vec3 Normal;          // The eye-space normal of the current vertex.
        varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex.
        varying vec3 LightDirection;  // The eye-space direction of the light.

        void main()
        {
            Normal = normalize(gl_NormalMatrix * gl_Normal);
            LightDirection = normalize(gl_NormalMatrix * gl_LightSource[0].position.xyz);
            LightCoordinate = LightModelViewProjectionMatrix * gl_Vertex;
            LightCoordinate.xy = (LightCoordinate.xy * 0.5) + 0.5;
            gl_Position = ftransform();
            gl_TexCoord[0] = gl_MultiTexCoord0;
        }

    Fragment shader:

        uniform sampler2D DiffuseMap;
        uniform sampler2D ShadowMap;

        varying vec3 Normal;          // The eye-space normal of the current vertex.
        varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex.
        varying vec3 LightDirection;  // The eye-space direction of the light.

        void main()
        {
            vec4 Texel = texture2D(DiffuseMap, vec2(gl_TexCoord[0]));

            // Directional lighting

            // Build ambient lighting
            vec4 AmbientElement = gl_LightSource[0].ambient;

            // Build diffuse lighting
            float Lambert = max(dot(Normal, LightDirection), 0.0); //max(abs(dot(Normal, LightDirection)), 0.0);
            vec4 DiffuseElement = (gl_LightSource[0].diffuse * Lambert);

            vec4 LightingColor = (DiffuseElement + AmbientElement);
            LightingColor.r = min(LightingColor.r, 1.0);
            LightingColor.g = min(LightingColor.g, 1.0);
            LightingColor.b = min(LightingColor.b, 1.0);
            LightingColor.a = min(LightingColor.a, 1.0);
            LightingColor *= Texel;

            // Everything up to this point is PERFECT

            // Shadow mapping
            // ------------------------------
            vec4 ShadowCoordinate = LightCoordinate / LightCoordinate.w;
            float DistanceFromLight = texture2D(ShadowMap, ShadowCoordinate.st).z;
            float DepthBias = 0.001;
            float ShadowFactor = 1.0;
            if (LightCoordinate.w > 0.0)
            {
                ShadowFactor = DistanceFromLight < (ShadowCoordinate.z + DepthBias) ? 0.5 : 1.0;
            }
            LightingColor.rgb *= ShadowFactor;

            //gl_FragColor = LightingColor;
            // Yes, I know the line below is wrong, but the line above (gl_FragColor = LightingColor;) produces the wrong effect
            gl_FragColor = LightingColor * texture2D(ShadowMap, ShadowCoordinate.st);
        }

    I wanted to make sure the coordinates were correct for the shadow map -- so that's why you see it applied to the image as it is below. But the depth for each point seems to be wrong -- the shadows SHOULD be opposite (look at how the image is -- the shaded areas from normal lighting are facing the opposite direction of the shadows). Maybe my matrices are bad or something going in? They're isolated and appear to be correct -- nothing unusual is going in. When I view from the light's view and get the MVP matrices for it, they're correct.

    EDIT: Added an image so you can see what happens when I do the correct command at the end of the GLSL. That's the image when the last line is just gl_FragColor = LightingColor;

    Maybe someone has some idea of what I screwed up?

    Read the article

  • SQL SERVER – Asynchronous Update and Timestamp – Check if Row Values are Changed Since Last Retrieve

    - by pinaldave
    Here is the question received just this morning. "Pinal, our application is much different than other applications you might have come across. In simple words, I would like to call it an Asynchronous Update Application. We need your quick opinion about one of the situations we are facing.

    From the business side: We have a bidding system (similar to eBay but not exactly) where multiple parties bid on one item. During the last few minutes of bidding, many parties try to bid at the same time with the same price. When they hit submit, we would like to check if the original data they retrieved has changed or not. If the original data they retrieved is the same, we will accept their new proposed price. If the original data has changed, they will have to resubmit the data with a new price.

    From the technical side: We have a row which we retrieve in our application. Multiple users are retrieving the same row. Some of the users will update the value of the row and submit. However, only the very first user should be allowed to update the row; all the remaining users will have to re-fetch the row and update it once again. We do not want to lock any record as that will create other problems. Do you have any solution for this kind of situation?"

    Fantastic question. I believe there is a good chance that we can use the timestamp datatype in this kind of application. Before we continue let us see the following simple example.

        USE tempdb
        GO
        CREATE TABLE SampleTable (ID INT, Col1 VARCHAR(100), TimeStampCol TIMESTAMP)
        GO
        INSERT INTO SampleTable (ID, Col1)
        VALUES (1, 'FirstVal')
        GO
        SELECT ID, Col1, TimeStampCol
        FROM SampleTable st
        GO
        UPDATE SampleTable
        SET Col1 = 'NextValue'
        GO
        SELECT ID, Col1, TimeStampCol
        FROM SampleTable st
        GO
        DROP TABLE SampleTable
        GO

    Now let us see the resultset. Here is the simple explanation of the scenario. We created a table with a simple column with the TIMESTAMP datatype. When we inserted the very first value, a timestamp was generated. When we updated any value in that row, the timestamp was updated to a new value. Every single time we update any value in the row, a new timestamp value is generated.

    Now let us apply this to the original question's scenario. In that case, multiple users are retrieving the same row, so everybody will have the same timestamp. Before any user updates any value, they should once again retrieve the timestamp from the table and compare it with the timestamp they already have. If both timestamps have the same value, the original row has not been updated and we can safely update the row with the new value. After the initial update, the row will contain a new timestamp. Any subsequent update to the same row should go through the same process of checking the timestamp value held in memory. In this case, the timestamp from memory will be different from the timestamp in the row. This indicates that the row in the table has changed and the new update should not be allowed.

    I believe timestamp can be very useful in this kind of scenario. Is there any better alternative? Please leave a comment with the suggestion and I will post it on the blog with due credit.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
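    For illustration, here is a minimal sketch of that compare-then-update round trip from application code, with the comparison folded into the UPDATE's WHERE clause so the check and the write happen in one statement. The connection string is a hypothetical assumption, and the new bid value is illustrative; the table and columns follow the example above.

        import pyodbc

        # Hypothetical connection string; adjust for your environment.
        conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                              "SERVER=localhost;DATABASE=tempdb;Trusted_Connection=yes")
        cur = conn.cursor()

        # 1. Read the row and remember the timestamp that came with it.
        row = cur.execute("SELECT Col1, TimeStampCol FROM SampleTable WHERE ID = 1").fetchone()
        original_value, original_ts = row

        # 2. Update only if the timestamp is still the one we read; a concurrent
        #    update will have changed it and the WHERE clause will match nothing.
        cur.execute("UPDATE SampleTable SET Col1 = ? WHERE ID = 1 AND TimeStampCol = ?",
                    "NewBidValue", original_ts)
        if cur.rowcount == 0:
            # Someone updated the row first: re-fetch and let the user resubmit.
            print("Row changed since last retrieve; please reload and try again.")
        else:
            conn.commit()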

    Read the article

  • Custom sectionGroup and Section App.config

    - by fampinheiro
        <configSections>
          <section name="castle" type="Castle.Windsor.Configuration.AppDomain.CastleSectionhandler, Castle.Windsor" />
          <sectionGroup name="codegarten">
            <section name="configuration" type="Tmp.StartupCodegartenConfigSection, Tmp" />
            <section name="apache" type="Tmp.StartupApacheConfigSection, Tmp" />
          </sectionGroup>
        </configSections>

    When I use the MSDN Main to see all the sections, I get this error:

        Unhandled Exception: System.Configuration.ConfigurationErrorsException: An error occurred creating the configuration section handler for codegarten/apache: Could not load type 'Tmp.StartupApacheConfigSection' from assembly 'Tmp'. (D:\Codegarten\trunk\Codegarten\Tmp\bin\Debug\Tmp.exe.Config line 8) ---> System.TypeLoadException: Could not load type 'Tmp.StartupApacheConfigSection' from assembly 'Tmp'.
           at System.Configuration.TypeUtil.GetTypeWithReflectionPermission(IInternalConfigHost host, String typeString, Boolean throwOnError)
           at System.Configuration.MgmtConfigurationRecord.CreateSectionFactory(FactoryRecord factoryRecord)
           at System.Configuration.BaseConfigurationRecord.FindAndEnsureFactoryRecord(String configKey, Boolean& isRootDeclaredHere)
           --- End of inner exception stack trace ---
           at System.Configuration.BaseConfigurationRecord.FindAndEnsureFactoryRecord(String configKey, Boolean& isRootDeclaredHere)
           at System.Configuration.BaseConfigurationRecord.GetSectionRecursive(String configKey, Boolean getLkg, Boolean checkPermission, Boolean getRuntimeObject, Boolean requestIsHere, Object& result, Object& resultRuntimeObject)
           at System.Configuration.ConfigurationSectionCollection.Get(String name)
           at System.Configuration.ConfigurationSectionCollection.<GetEnumerator>d__0.MoveNext()
           at Tmp.Program.ShowSectionGroupInfo(ConfigurationSectionGroup sectionGroup) in D:\Codegarten\trunk\Codegarten\Tmp\Program.cs:line 53
           at Tmp.Program.ShowSectionGroupCollectionInfo(ConfigurationSectionGroupCollection sectionGroups) in D:\Codegarten\trunk\Codegarten\Tmp\Program.cs:line 30
           at Tmp.Program.Main(String[] args) in D:\Codegarten\trunk\Codegarten\Tmp\Program.cs:line 22

    Thanks

    Read the article

  • Spikes in Socket Performance

    - by Harun Prasad
    We are facing random spikes in a high-throughput transaction processing system that uses sockets for IPC. Below is the setup used for the run:

    The client opens and closes a new connection for every transaction, and there are 4 exchanges between the server and the client. We have disabled TIME_WAIT by setting the socket linger (SO_LINGER) option via setsockopt, as we thought the spikes were caused by sockets waiting in TIME_WAIT. There is no processing done for the transaction; only messages are passed. The OS used is CentOS 5.4.

    The average round trip time is around 3 milliseconds, but sometimes the round trip time ranges from 100 milliseconds to a couple of seconds.

    Steps used for execution and measurement:

    Starting the server:

        $ python sockServerLinger.py > /dev/null &

    Starting the client to post 1 million transactions to the server. The client logs the time for each transaction in the client.log file:

        $ python sockClient.py 1000000 client.log

    Once the execution finishes, the following command will show the execution times greater than 100 milliseconds in the format <line_number>:<execution_time>:

        $ grep -n "0.[1-9]" client.log | less

    Below is the example code for the server and the client.

    Server:

        # File: sockServerLinger.py
        import socket, traceback, time
        import struct

        host = ''
        port = 9999

        l_onoff = 1
        l_linger = 0
        lingeropt = struct.pack('ii', l_onoff, l_linger)

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, lingeropt)
        s.bind((host, port))
        s.listen(1)

        while 1:
            try:
                clientsock, clientaddr = s.accept()
                print "Got connection from", clientsock.getpeername()
                data = clientsock.recv(1024*1024*10)
                #print "asdasd", data
                numsent = clientsock.send(data)
                data1 = clientsock.recv(1024*1024*10)
                numsent = clientsock.send(data)
                ret = 1
                while ret > 0:
                    data1 = clientsock.recv(1024*1024*10)
                    ret = len(data)
                clientsock.close()
            except KeyboardInterrupt:
                raise
            except:
                print traceback.print_exc()
                continue

    Client:

        # File: sockClient.py
        import socket, traceback, sys
        import time

        i = 0
        while 1:
            try:
                st = time.time()
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                while s.connect_ex(('127.0.0.1', 9999)) != 0:
                    continue
                numsent = s.send("asd"*1000)
                response = s.recv(6000)
                numsent = s.send("asd"*1000)
                response = s.recv(6000)
                i += 1
                if i == int(sys.argv[1]):
                    break
            except KeyboardInterrupt:
                raise
            except:
                print "in exec:::::::::::::", traceback.print_exc()
                continue
        print time.time() - st

    Read the article

  • Address Match Key Algorithm

    - by sestocker
    I have a list of addresses in two separate tables that are slightly off and that I need to be able to match. For example, the same address can be entered in multiple ways:

        110 Test St
        110 Test St.
        110 Test Street

    Although this example is simple, you can imagine the situation in more complex scenarios. I am trying to develop a simple algorithm that will be able to match the above addresses to a single key. For example, the key might be "11TEST" -- the first two characters of 110, the first two of Test, and the first two of the street variant. A full match key would also include the first 5 digits of the zip code, so in the above example the full key might look like "11TEST44680".

    I am looking for ideas for an effective algorithm, or resources I can look at for considerations when developing this. Any ideas can be pseudo code or in your language of choice. We are only concerned with US addresses. In fact, we are only looking at addresses from 250 zip codes in Ohio and Michigan. We also do not have access to any postal software, although we would be open to ideas for cost-effective solutions (it would essentially be a one-time use). Please be mindful that this is an initial dump of data from a government source, so suggestions on how users can clean it are helpful as I build out the application, but I would love to have the best initial match I possibly can.
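    For illustration, here is a minimal sketch of one way such a key could be built. The suffix normalization map and the helper name are illustrative assumptions, not a tested matching scheme; a real version would cover the full USPS suffix table and handle multi-word street names.

        import re

        # Illustrative suffix normalization; a real list would be much longer.
        SUFFIXES = {"STREET": "ST", "ST": "ST", "AVENUE": "AV", "AVE": "AV",
                    "ROAD": "RD", "RD": "RD"}

        def match_key(address, zipcode):
            """Build a crude key: first 2 chars of the number, the street name,
            and the normalized suffix, plus the 5-digit zip."""
            words = re.sub(r"[^A-Za-z0-9 ]", "", address).upper().split()
            number, name = words[0], words[1]
            suffix = SUFFIXES.get(words[-1], words[-1]) if len(words) > 2 else ""
            return number[:2] + name[:2] + suffix[:2] + zipcode[:5]

        # All three variants collapse to the same key:
        for a in ["110 Test St", "110 Test St.", "110 Test Street"]:
            print(match_key(a, "44680"))   # -> "11TEST44680"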

    Read the article

  • Reading from a file, atoi() returns zero only on first element

    - by Nazgulled
    Hi, I don't understand why atoi() is working for every entry but the first one. I have the following code to parse a simple .csv file:

        void ioReadSampleDataUsers(SocialNetwork *social, char *file) {
            FILE *fp = fopen(file, "r");

            if(!fp) {
                perror("fopen");
                exit(EXIT_FAILURE);
            }

            char line[BUFSIZ], *word, *buffer, name[30], address[35];
            int ssn = 0, arg;

            while(fgets(line, BUFSIZ, fp)) {
                line[strlen(line) - 2] = '\0';

                buffer = line;
                arg = 1;

                do {
                    word = strsep(&buffer, ";");

                    if(word) {
                        switch(arg) {
                            case 1:
                                printf("[%s] - (%d)\n", word, atoi(word));
                                ssn = atoi(word);
                                break;
                            case 2:
                                strcpy(name, word);
                                break;
                            case 3:
                                strcpy(address, word);
                                break;
                        }

                        arg++;
                    }
                } while(word);

                userInsert(social, name, address, ssn);
            }

            fclose(fp);
        }

    And the .csv sample file is this:

        900011000;Jon Yang;3761 N. 14th St
        900011001;Eugene Huang;2243 W St.
        900011002;Ruben Torres;5844 Linden Land
        900011003;Christy Zhu;1825 Village Pl.
        900011004;Elizabeth Johnson;7553 Harness Circle

    But this is the output:

        [900011000] - (0)
        [900011001] - (900011001)
        [900011002] - (900011002)
        [900011003] - (900011003)
        [900011004] - (900011004)

    What am I doing wrong?

    Read the article

  • T-SQL Table Joins - Unique Situation

    - by Dimitri
    Hello everyone. This is my first time encountering a case like this and I don't quite know how to handle it.

    Situation: I have one table, tblSettingDefinition, with fields ID, GroupID, Name, TypeID, DefaultValue. Then I have tblSettingTypes with fields TypeID, Name. And I have a final table, tblUserSettings, with fields SettingID, SettingDefinitionID, UserID, Value.

    The whole point of this is to have customizable settings. A setting can be defined for a group or as a global setting (if GroupID is NULL). It will have a default value, but if the user modifies the setting, an entry is added to tblUserSettings that stores the new value.

    I want to have a query that grabs user settings by first looking at tblUserSettings, and if it has records for the given user, grabs them; if not, retrieves the default settings. But the trick is that no matter whether the user has settings or not, I need the fields from the other two tables retrieved to know the setting's Type, Name, etc. (which are stored in those other tables). I'm writing a query something like this:

        SELECT *
        FROM tblSettingDefinition SD
        LEFT JOIN tblUserSettings US ON SD.SettingID = US.SettingDefinitionID
        JOIN tblSettingTypes ST ON SD.TypeID = ST.ID
        WHERE US.UserID = @UserID
           OR ((SD.GroupID IS NULL)
            OR (SD.GroupID = (SELECT GroupID FROM tblUser WHERE ID = @UserID)))

    but it retrieves settings for all users from tblUserSettings instead of just the ones that match the current @UserID. And if @UserID has no records in tblUserSettings, still, all user settings are retrieved instead of the defaults from tblSettingDefinition.

    Hope I made myself clear. Any help would be highly appreciated. Thank you.

    Read the article

  • Is there any other efficient way to use a table variable instead of a temporary table?

    - by varta shrimali
    We are writing a script to display banners on a web page, using a temporary table in a MySQL procedure. Is there any other efficient way to use a table variable instead of a temporary table? We are using the following code:

        -- banner location CURSOR
        DECLARE banner_location_cursor CURSOR FOR
            select bm.id as masterId, bm.section as masterName,
                   bs.id as locationId, bs.sectionName as locationName
            from banner_master as bm
            inner join banner_section as bs on bm.id = bs.masterId
            where bm.section = sCode;

        -- DECLARE banner CURSORS
        DECLARE banner_cursor CURSOR FOR
            SELECT bd.id as bannerId, bd.sectionId, bd.bannerName, bd.websiteURL,
                   bd.paymentType, bd.status, bd.startDate, bd.endDate,
                   bd.bannerDisplayed, bs.id, bs.sectionName
            from banner_detail as bd
            inner join banner_section as bs on bs.id = bd.sectionId
            where bs.id = location_id
              and bd.status = 'A'
              and (dates between cast(bd.startDate as DATE) and cast(bd.endDate as DATE))
            order by rand(), bd.bannerDisplayed asc
            limit 1;

        DECLARE CONTINUE HANDLER FOR NOT FOUND SET no_more_rows = 1;

        SET dates = (select curdate());

        -- RESULTS TABLE WHICH WILL BE RETURNED
        CREATE temporary TABLE test (
            b_id INT,
            s_id INT,
            b_name varchar(128),
            w_url varchar(128),
            p_type varchar(128),
            st char(1),
            s_date datetime,
            e_date datetime,
            b_display int,
            sec_id int,
            s_name varchar(128)
        );

        -- OPEN banner location CURSOR
        OPEN banner_location_cursor;
        the_loop: LOOP
            FETCH banner_location_cursor INTO master_id, master_name, location_id, location_name;

            IF no_more_rows THEN
                CLOSE banner_location_cursor;
                leave the_loop;
            END IF;

            OPEN banner_cursor;
            -- select FOUND_ROWS();
            the_loop2: LOOP
                FETCH banner_cursor INTO banner_id, section_id, banner_name, website_url,
                    payment, status, start_date, end_date, banner_displayed, sec_id, section_name;

                IF no_more_rows THEN
                    set no_more_rows = 0;
                    CLOSE banner_cursor;
                    leave the_loop2;
                END IF;

                INSERT INTO test (
                    b_id, s_id, b_name, w_url, p_type, st,
                    s_date, e_date, b_display, sec_id, s_name
                ) VALUES (
                    banner_id, section_id, banner_name, website_url, payment, status,
                    start_date, end_date, banner_displayed, sec_id, section_name
                );

                UPDATE banner_detail
                set bannerDisplayed = (banner_displayed + 1)
                where id = banner_id;
            END LOOP the_loop2;
        END LOOP the_loop;

        -- RETURN result
        SELECT * FROM test;

        -- DROP RESULTS TABLE
        DROP TABLE test;
        END

    Read the article

  • SQL query in JSP file pulling variable from VXML file

    - by s1066
    Hi, I'm trying to get an SQL query to work within a JSP file. The JSP file is pulled by a VXML file. Here is my JSP file code:

        <?xml version="1.0"?>
        <%@ page import="java.util.*" %>
        <%@ page import="java.sql.*" %>
        <%
        boolean success = true; // Always optimistic
        String info = "";

        String schoolname = request.getParameter("schoolname");
        String informationtype = request.getParameter("informationtype");

        try {
            Class.forName("org.postgresql.Driver");
            String connectString = "jdbc:postgresql://localhost:5435/N0176359";
            String user = "****";
            String password = "*****";
            Connection conn = DriverManager.getConnection(connectString, user, password);

            Statement st = conn.createStatement();
            ResultSet rsvp = st.executeQuery("SELECT * FROM lincolnshire_school_information_new WHERE school_name=\'" + schoolname + "\'");
            rsvp.next();
            info = rsvp.getString(2);
        } catch (ClassNotFoundException e) {
            success = false; // something went wrong
        }
        %>

    As you can see, I'm trying to insert the value of the variable declared as "schoolname" into the end of the SQL query. However, when I come to run the JSP file it doesn't work and I get the error "ResultSet not positioned properly". When I put a standard query in (without trying to include the value of the variable) it works fine.

    Hope that makes sense, and thank you for any help!

    Read the article

  • ajax to populate an input type text

    - by kawtousse
    Hi, I have a text input that I want to populate with a value from the database using the Ajax technique.

    First I define my text zone like the following:

        <td><input type=text id='st' value=" " name='stname' onclick="donnom();" /></td>

    In JavaScript I do the following:

        xhr5.onreadystatechange = function() {
            if(xhr5.readyState == 4 && xhr5.status == 200) {
                selects5 = xhr5.responseText;
                // Use innerHTML to add the options to the list
                document.getElementById('st').innerHTML = selects5;
            }
        };

        xhr5.open("POST", "ajaxIDentifier5.jsp", true);
        xhr5.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
        id = document.getElementById(idIdden).value;
        xhr5.send("id=" + id);

    In IDentifier5.jsp I put the following code:

        <%
        String id = request.getParameter("id");
        System.out.println("idDailyTimeSheet ajaxIDentifier5 as is:" + id);

        Session s = null;
        Transaction tx;

        try {
            s = HibernateUtil.currentSession();
            tx = s.beginTransaction();
            Query query = s.createQuery("select from Dailytimesheet dailytimesheet where dailytimesheet.IdDailyTimeSheet=" + id + " ");

            for(Iterator it = query.iterate(); it.hasNext();) {
                if(it.hasNext()) {
                    Dailytimesheet object = (Dailytimesheet)it.next();
                    out.print("<input type=\"text\" id=\"st1\" value=\"" + object.getTimeFrom() + "\" name=\"starting\" onclick=\"donnom()\" ></input>");
                }
            }
        } catch (HibernateException e) {
            e.printStackTrace();
        }
        %>

    I want to get only the value in the text input populated from the database, because after that I will be able to change it. Thanks for the help.

    Read the article

  • Performance Difference between HttpContext user and Thread user

    - by atrueresistance
    I am wondering what the difference is between HttpContext.Current.User.Identity.Name.ToString.ToLower and Thread.CurrentPrincipal.Identity.Name.ToString.ToLower. Both methods grab the username in my ASP.NET 3.5 web service.

    I decided to figure out if there was any difference in performance using a little program, running from full Stop to Start Debugging on every run.

        Dim st As DateTime = DateAndTime.Now
        Try
            'user = HttpContext.Current.User.Identity.Name.ToString.ToLower
            user = Thread.CurrentPrincipal.Identity.Name.ToString.ToLower

            Dim dif As TimeSpan = Now.Subtract(st)
            Dim break As String = "nothing"
        Catch ex As Exception
            user = "Undefined"
        End Try

    I set a breakpoint on break to read the value of dif. The results were the same for both methods:

        dif.Milliseconds  0  Integer
        dif.Ticks         0  Long

    Using a longer duration, looping 5,000 times, results in these figures.

    Thread method:

        run 1: dif.Milliseconds 125  dif.Ticks 1250000
        run 2: dif.Milliseconds 0    dif.Ticks 0
        run 3: dif.Milliseconds 0    dif.Ticks 0

    HttpContext method:

        run 1: dif.Milliseconds 15   dif.Ticks 156250
        run 2: dif.Milliseconds 156  dif.Ticks 1562500
        run 3: dif.Milliseconds 0    dif.Ticks 0

    So I guess: which is more preferred, or more compliant with web service standards? If there is some type of performance advantage, I can't really tell. Which one scales to larger environments more easily?

    Read the article

  • How to structure an index for type ahead for extremely large dataset using Lucene or similar?

    - by Pete
    I have a dataset of 200 million+ records and am looking to build a dedicated backend to power a type-ahead solution. Lucene is of interest given its popularity and license type, but I'm open to other open source suggestions as well. I am looking for advice, tales from the trenches, or even better, direct instruction on what I will need as far as amount of hardware and structure of software.

    Requirements:

    Must have:

    - The ability to do starts-with substring matching (I type in 'st' and it should match 'Stephen')
    - The ability to return results very quickly; I'd say 500ms is an upper bound.

    Nice to have:

    - The ability to feed relevance information into the indexing process, so that, for example, more popular terms would be returned ahead of others and not just alphabetically, aka Google style.
    - In-word substring matching (so, for example, 'st' would match 'bestseller').

    Note: This index will purely be used for type ahead, and does not need to serve standard search queries. I am not worried about getting advice on how to set up the front end or AJAX, as long as the index can be queried as a service or directly via Java code. Up votes for any useful information that allows me to get closer to an enterprise-level type-ahead solution.
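    For a sense of what the core starts-with lookup involves, here is a minimal in-memory sketch using a sorted term list and binary search. The terms and popularity weights are illustrative; an engine like Lucene would typically handle this with prefix queries or edge n-gram analysis at index time (an assumption worth verifying against the Lucene version in use), and the in-word case ('bestseller') would need n-grams rather than a plain prefix scan.

        import bisect

        # Terms sorted once at "index" time; weights are illustrative popularity scores.
        terms = sorted([("stephen", 90), ("sterling", 40), ("steve", 75),
                        ("stone", 10), ("bestseller", 60)])

        def type_ahead(prefix, k=10):
            """Return up to k terms starting with prefix, most popular first."""
            lo = bisect.bisect_left(terms, (prefix,))
            hi = bisect.bisect_left(terms, (prefix + "\uffff",))
            matches = terms[lo:hi]   # contiguous slice: all starts-with hits
            return sorted(matches, key=lambda t: -t[1])[:k]

        print(type_ahead("st"))
        # [('stephen', 90), ('steve', 75), ('sterling', 40), ('stone', 10)]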

    Read the article

  • Faster way to split a string and count characters using R?

    - by chrisamiller
    I'm looking for a faster way to calculate GC content for DNA strings read in from a FASTA file. This boils down to taking a string and counting the number of times that the letter 'G' or 'C' appears. I also want to specify the range of characters to consider.

    I have a working function that is fairly slow, and it's causing a bottleneck in my code. It looks like this:

        ##
        ## count the number of GCs in the characters between start and stop
        ##
        gcCount <- function(line, st, sp){
          chars = strsplit(as.character(line), "")[[1]]
          numGC = 0
          for(j in st:sp){
            ## nested ifs faster than an OR (|) construction
            if(chars[[j]] == "g"){
              numGC <- numGC + 1
            }else if(chars[[j]] == "G"){
              numGC <- numGC + 1
            }else if(chars[[j]] == "c"){
              numGC <- numGC + 1
            }else if(chars[[j]] == "C"){
              numGC <- numGC + 1
            }
          }
          return(numGC)
        }

    Running Rprof gives me the following output:

        > a = "GCCCAAAATTTTCCGGatttaagcagacataaattcgagg"
        > Rprof(filename="Rprof.out")
        > for(i in 1:500000){gcCount(a,1,40)};
        > Rprof(NULL)
        > summaryRprof(filename="Rprof.out")

        $by.self
                       self.time self.pct total.time total.pct
        "gcCount"          77.36     76.8     100.74     100.0
        "=="               18.30     18.2      18.30      18.2
        "strsplit"          3.58      3.6       3.64       3.6
        "+"                 1.14      1.1       1.14       1.1
        ":"                 0.30      0.3       0.30       0.3
        "as.logical"        0.04      0.0       0.04       0.0
        "as.character"      0.02      0.0       0.02       0.0

        $by.total
                       total.time total.pct self.time self.pct
        "gcCount"          100.74     100.0     77.36     76.8
        "=="                18.30      18.2     18.30     18.2
        "strsplit"           3.64       3.6      3.58      3.6
        "+"                  1.14       1.1      1.14      1.1
        ":"                  0.30       0.3      0.30      0.3
        "as.logical"         0.04       0.0      0.04      0.0
        "as.character"       0.02       0.0      0.02      0.0

        $sampling.time
        [1] 100.74

    Any advice for making this code faster?
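    Just to pin down the computation being optimized, here is the same windowed count written out in Python (an illustrative restatement of the task only, not the R speedup being asked about):

        def gc_count(line, st, sp):
            """Count G/C (either case) between 1-based positions st and sp inclusive."""
            return sum(ch in "gGcC" for ch in line[st-1:sp])

        print(gc_count("GCCCAAAATTTTCCGGatttaagcagacataaattcgagg", 1, 40))  # 16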

    Read the article

  • Copying a subset of data to an empty database with the same schema

    - by user193655
    I would like to export part of a database full of data to an empty database. Both databases have the same schema. I want to maintain referential integrity.

    To simplify, my case is like this:

    MainTable has the following fields:

    1) MainID integer PK
    2) Description varchar(50)
    3) ForeignKey integer FK to MainID of SecondaryTable

    SecondaryTable has the following fields:

    4) MainID integer PK (referenced by (3))
    5) AnotherDescription varchar(50)

    The goal I'm trying to accomplish is "export all records from MainTable using a WHERE condition", for example all records where MainID < 100.

    To do it manually I should first export all the data from SecondaryTable contained in this select:

        select * from SecondaryTable ST
        outer join PrimaryTable PT on ST.MainID = PT.MainID

    then export the needed records from MainTable:

        select * from MainTable where MainID < 100

    This is manual, OK. Of course my real case is much, much more complex: I have 200+ tables, so doing it manually is painful/impossible, and I have many cascading FKs. Is there a way to force the copy of the main table only while "enforcing referential integrity", so that my query is something like:

        select * from MainTable where MainID < 100 WITH "COPYING ALL FK sources"

    In this case field (5) would also be copied.

    Is there a syntax or a tool to do this? Table per table, I'd like to insert conditions (like MainID < 100 applying only to MainTable; I also have other tables).

    Read the article

  • How to get pixel information inside a fragment shader?

    - by user697111
    In my fragment shader I can load a texture, then do this:

        uniform sampler2D tex;

        void main(void) {
            vec4 color = texture2D(tex, gl_TexCoord[0].st);
            gl_FragColor = color;
        }

    That sets the current pixel to the color value of the texture. I can modify these, etc., and it works well. But a few questions:

    How do I tell "which" pixel I am? For example, say I want to set pixel 100,100 (x,y) to red and everything else to black. How do I do a: "if currentSelf.Position() == (100,100); then color=red; else color=black?" I know how to set colors, but how do I get "my" location?

    Secondly, how do I get values from a neighboring pixel? I tried this:

        vec4 nextColor = texture2D(tex, gl_TexCoord[1].st);

    But it's not clear what it is returning. If I'm pixel 100,100, how do I get the values from 101,100 or 100,101?

    Read the article

  • LINQ Joins - Performance

    - by Meiscooldude
    I am curious how exactly LINQ (not LINQ to SQL) performs its joins behind the scenes, in relation to how SQL Server performs joins.

    SQL Server, before executing a query, generates an execution plan. The execution plan is basically an expression tree describing what it believes is the best way to execute the query. Each node provides information on whether to do a sort, scan, select, join, etc. On a 'Join' node in our execution plan, we can see three possible algorithms: Hash Join, Merge Join, and Nested Loops Join. SQL Server will choose which algorithm to use for each join operation based on the expected number of rows in the inner and outer tables, the type of join we are doing (some algorithms don't support all types of joins), whether we need the data ordered, and probably many other factors.

    Join algorithms:

    - Nested Loops Join: best for small inputs; can be optimized with an ordered inner table.
    - Merge Join: best for medium to large sorted inputs, or an output that needs to be ordered.
    - Hash Join: best for medium to large inputs; can be parallelized to scale linearly.

    LINQ query:

        DataTable firstTable, secondTable;
        ...
        var rows = from firstRow in firstTable.AsEnumerable()
                   join secondRow in secondTable.AsEnumerable()
                   on firstRow.Field<object>(randomObject.Property)
                   equals secondRow.Field<object>(randomObject.Property)
                   select new { firstRow, secondRow };

    SQL query:

        SELECT *
        FROM firstTable fT
        INNER JOIN secondTable sT ON fT.Property = sT.Property

    SQL Server might use a Nested Loops Join if it knows there are a small number of rows in each table, a Merge Join if it knows one of the tables has an index, and a Hash Join if it knows there are a lot of rows in either table and neither has an index.

    Does LINQ choose its algorithm for joins, or does it always use the same one?
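    For what it's worth, LINQ to Objects' Enumerable.Join is generally described as building a hash-based lookup over one input and probing it with the other -- effectively always a hash join, with no cost-based choice. Here is a minimal sketch of that hash-join idea in Python, with illustrative data:

        from collections import defaultdict

        def hash_join(outer, inner, outer_key, inner_key):
            """Minimal hash join: build a hash table on the inner input,
            then probe it once per outer row."""
            table = defaultdict(list)
            for row in inner:                      # build phase
                table[inner_key(row)].append(row)
            for row in outer:                      # probe phase
                for match in table.get(outer_key(row), []):
                    yield (row, match)

        # Illustrative data
        people = [("alice", 1), ("bob", 2)]
        depts  = [(1, "eng"), (1, "ops"), (2, "sales")]
        print(list(hash_join(people, depts, lambda p: p[1], lambda d: d[0])))
        # [(('alice', 1), (1, 'eng')), (('alice', 1), (1, 'ops')), (('bob', 2), (2, 'sales'))]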

    Read the article

  • Changing html content of a div before and after ajax request

    - by R27
    I am trying to change the "ADD" button (in a div) to some text/img as soon as it is clicked, and after the Ajax request is processed, in the success block, I want the div to get the button back. I see the Ajax request itself is not getting processed. Can someone explain what my mistake is? I just removed the jsfiddle link and am pasting the script here to avoid confusion about the dependencies.

    JS script:

        var ajax_load = "Please wait...";

        jQuery(document).ready(function($) {
            $("#add_button").click(function(event) {
                var st = $("#add_div").html();
                $("#add_div").html(ajax_load);

                $("#sform").validate({
                    errorClass: "error",
                    submitHandler: function(form) {
                        alert('inside submit');
                        $.ajax({
                            type: "GET",
                            url: 'form.cgi',
                            data: $("#sform").serialize(),
                            success: function(msg) {
                                alert('msg');
                                $("#add_div").html(st);
                                $("#sform")[0].reset();
                            }
                        });
                    }
                });
            });
        });

    And the html piece is:

        <form id=sform>LABEL <input id=field1 type=text>
            <div id="add_div">
                <input type="button" value="ADD" id="add_button">
            </div>
        </form>

    I have the jquery.validate.min.js script included.

    Read the article

  • Handling Denormalized Schema with Eclipselink

    - by iamrohitbanga
    Hello all. I have a denormalized table containing employee information. The fields are employee id, name, and department name. The primary key is a composite one consisting of all three fields. An employee can belong to multiple departments. I want to read/write the objects in the table using the Eclipselink Dynamic Persistence API (which is in fact a wrapper on top of JPA descriptors etc.).

    Example data:

        1 e1 dep1
        2 e1 dep2
        3 e2 dep1
        4 e2 dep3
        5 e3 dep1
        5 e3 dep2
        5 e3 dep3

    A normal ReadAllQuery (select query) on the table returns a DynamicEntity corresponding to each row in the table. However, I want to group the entities by employee id and return all the departments an employee belongs to as a list. I can merge the entities after retrieving them, but if I can use some Eclipselink feature out of the box then that would be better.

    One way to do the read is the following: I create two dynamic types corresponding to employee:

    1. having id, name as the primary key
    2. having id, department as the primary key

    I create a OneToManyMapping from the first type to the second one. Then when I query the first type, it does return the departments to which an employee belongs as a list of DynamicEntity of the second type. This satisfies the read scenario. Is there a better way of doing this? Is this inherently supported by Eclipselink or JPA?

    I cannot get the same dynamic type configuration working for the write scenario. This is because when I write the changes using the writeObject method of UnitOfWork, it generates insert queries which enter the following rows in the table:

        id   name     department
        102  emp_102
        102  st
        102  dep_102
        102  dep_102
        102  dep_102

    instead of:

        id   name     department
        102  emp_102  st
        102  emp_102  dep_102
        102  emp_102  dep_102
        102  emp_102  dep_102

    Is there any way I can get the write to work with this schema using Eclipselink? I want to avoid doing the heavy lifting of merging the rows for such a denormalized schema, or generating each row before doing a write. Is there no clean way of doing this using Eclipselink or JPA?

    Thanks in advance.

    Read the article

  • R optimization: How can I avoid a for loop in this situation?

    - by chrisamiller
    I'm trying to do a simple genomic track intersection in R, and running into major performance problems, probably related to my use of for loops.

    In this situation, I have pre-defined windows at intervals of 100bp and I'm trying to calculate how much of each window is covered by the annotations in mylist. Graphically, it looks something like this:

                 0     100   200   300   400   500   600
        windows: |-----|-----|-----|-----|-----|-----|
        mylist:   |-|       |-----------|

    So I wrote some code to do just that, but it's fairly slow and has become a bottleneck in my code:

        ## window for each 100-bp segment
        windows <- numeric(6)

        ## second track
        mylist = vector("list")
        mylist[[1]] = c(1,20)
        mylist[[2]] = c(120,320)

        ## do the intersection
        for(i in 1:length(mylist)){
          st <- floor(mylist[[i]][1]/100)+1
          sp <- floor(mylist[[i]][2]/100)+1
          for(j in st:sp){
            b <- max((j-1)*100, mylist[[i]][1])
            e <- min(j*100, mylist[[i]][2])
            windows[j] <- windows[j] + e - b + 1
          }
        }

        print(windows)
        [1]  20  81 101  21   0   0

    Naturally, this is being used on data sets that are much larger than the example I provide here. Through some profiling, I can see that the bottleneck is in the for loops, but my clumsy attempt to vectorize it using *apply functions resulted in code that runs an order of magnitude more slowly. I suppose I could write something in C, but I'd like to avoid that if possible. Can anyone suggest another approach that will speed this calculation up?
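    To make the overlap arithmetic concrete, here is the same clipping computation sketched in Python with the same toy inputs as above (this mirrors the R loop for clarity rather than replacing it):

        # Each interval [start, end] is clipped against the 100bp windows it spans.
        windows = [0] * 6
        mylist = [(1, 20), (120, 320)]

        for start, end in mylist:
            first, last = start // 100, end // 100   # 0-based window indices
            for j in range(first, last + 1):
                b = max(j * 100, start)              # clip to window start
                e = min((j + 1) * 100, end)          # clip to window end
                windows[j] += e - b + 1

        print(windows)  # [20, 81, 101, 21, 0, 0]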

    Read the article

  • Ruby - Nokogiri - Need to put node.value to an array

    - by r3nrut
    What I'm trying to do is read the value for all the nodes in this XML and put them into an array. This should be simple but for some reason it's driving me nuts.

    XML:

        <ArrayOfAddress>
          <Address>
            <AddressId>297424fe-cfff-4ee1-8faa-162971d2645f</AddressId>
            <FirstName>George</FirstName>
            <LastName>Washington</LastName>
            <Address1>123 Main St</Address1>
            <Address2>Apt #611</Address2>
            <City>New York</City>
            <State>NY</State>
            <PostalCode>10110</PostalCode>
            <CountryCode>US</CountryCode>
            <EmailAddress>[email protected]</EmailAddress>
            <PhoneNumber>5555551234</PhoneNumber>
            <AddressType>CustomerAddress</AddressType>
          </Address>
        </ArrayOfAddress>

    Code:

        class MassageRepsone
          def parse_resp
            @@get_address.url_builder # URL passed through HTTPClient - @@resp is the XML above
            doc = Nokogiri::XML::Reader(@@resp)
            @@values = doc.each do |node|
              node.value
            end
          end

          @@get_address.parse_resp
          obj = [@@values]
          Array(obj)
          p obj
        end

    The code snippet from above returns the following:

        297424fe-cfff-4ee1-8faa-162971d2645f
        George
        Washington
        123 Main St
        Apt #622
        New York
        NY
        10110
        US
        test.test.com
        5555551234
        CustomerAddress

    I tried putting @@values into a string and applying chomp, but that just prints the newlines as nil and puts quotes around the values. Not sure what the next step is or if I need to approach this differently with Nokogiri.

    Read the article

  • How can I [simply] consume JSON Data in a Line of Business Web Application

    - by Atomiton
    I usually use JSON with jQuery to just return a string with HTML. However, I want to start to use JavaScript objects in my code. What's the simplest way to get started using JSON objects on my page?

    Here's a sample Ajax call (inside $(document).ready(...), of course):

        $('#btn').click(function(event) {
            event.preventDefault();
            var out = $('#result');
            $.ajax({
                url: "CustomerServices.asmx/GetCustomersByInvoiceCount",
                success: function(msg) {
                    //
                    // Iterate through the json results and spit them out to a page?
                    //
                },
                data: "{ 'invoiceCount' : 100 }"
            });
        });

    My WebMethod:

        [WebMethod(Description="Gets customers with more than n invoices")]
        public List<Customer> GetCustomersByInvoiceCount(int? invoiceCount)
        {
            using (dbDataContext db = new dbDataContext())
            {
                return db.Customers.Where(c => c.InvoiceCount >= invoiceCount);
            }
        }

    What gets returned:

        {"d":[{"__type":"Customer","Account":"1116317","Name":"SOME COMPANY","Address":"UNit 1 , 392 JOHN ST. ","LastTransaction":"\/Date(1268294400000)\/","HighestBalance":13922.34},
              {"__type":"Customer","Account":"1116318","Name":"ANOTHER COMPANY","Address":"UNIT #345 , 392 JOHN ST. ","LastTransaction":"\/Date(1265097600000)\/","HighestBalance":549.42}]}

    What I'd LIKE to know is what people are generally doing with this returned JSON:

    - Do you iterate through the properties and create an html table on the fly?
    - Is there a way to "bind" JSON data using a javascript version of reflection (something like the .NET GridView control)?
    - Do you throw this returned data into a JavaScript object and then do something with it?

    An example of what I want to achieve is to have a plain ol' HTML page (on a mobile device) with a list of a salesperson's customers. When one of those customers is clicked, the customer id gets sent to a web service which retrieves the customer details that are relevant to a salesperson.

    I know the SO talent pool is quite deep, so I figured you all here would be able to guide me in the right direction and give me a few ideas on the best way to approach this.

    Read the article

  • Method invoked but not returning anything

    - by or azran
    I am calling this method from a touch in a UITableViewCell:

        -(void)PassTitleandiMageByindex:(NSString *)number {
            NSLog(@"index : %d", number.intValue);

            NSArray *objectsinDictionary = [[[NSArray alloc] init] autorelease];
            objectsinDictionary = [[DataManeger sharedInstance].sortedArray objectAtIndex:number.intValue];

            if ([objectsinDictionary count] > 3) {
                ProductLabel = [objectsinDictionary objectAtIndex:3];
                globaliMageRef = [objectsinDictionary objectAtIndex:2];
                [self performSegueWithIdentifier:@"infoseg" sender:self];
            } else {
                mQid = [objectsinDictionary objectAtIndex:1];
                globaliMageRef = [objectsinDictionary objectAtIndex:2];
                [mIQEngines Result:mQid];
                NSLog(@"mQid : %@ Global : %@", mQid, globaliMageRef);
            }
        }

    Problem: I am trying to get the same functionality programmatically by calling

        [self PassTitleandiMageByindex:stringNumber];

    When I call this programmatically I can see the debug log getting to the place it should be, but nothing happens. Here is how it turns on by touch (pressing the UITableViewCell on the screen):

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            [tableView deselectRowAtIndexPath:indexPath animated:YES];
            NSString *st = [NSString stringWithFormat:@"%d", [indexPath row]];
            [self PassTitleandiMageByindex:st];
        }

    I have also tried to call the table-view delegate by:

        [[Tview delegate] tableView:Tview didSelectRowAtIndexPath:selectedCellIndexPath];

    but nothing happened.

    Read the article

  • jquery is getting the old values from database

    - by sansknwoledge
    Hi, in my JSP page I have a jQuery area which passes values to a servlet that returns the output of a dropdown list. The JSP file then does some updates, so certain values that were in the dropdown list should no longer be there when it is repopulated, but that is not happening.

    My jQuery code is:

        $("#cbocode").change(function() {
            var cdid = $("#cbocode option:selected");
            $.get("trnDC?caseNo=20&cdid=" + cdid.text(), function(data) {
                $("#divinstrument").html(data);
            })

    and the servlet code is:

        case 20: {
            // jquery call
            String cdid = (String) request.getParameter("cdid");
            Statement st = con.createStatement();
            ResultSet rs = st.executeQuery("select instrumentid from mstinstrument where codeid='" + cdid + "' and rec_Status='A' and statusid='U' and Agentid='METLAB'");

            if (!rs.wasNull()) {
                //List data = new ArrayList();
                String v = "<select id=cboinstr>";
                while (rs.next()) {
                    // data.add(rs.getString("vend_code"));
                    v += "<option>" + rs.getString("instrumentid").toString() + "</option>";
                }
                v += "</select>";

                response.setContentType("text/html");
                PrintWriter out = response.getWriter();
                out.print(v);
            } else {
                response.setContentType("text/html");
                PrintWriter out = response.getWriter();
                out.print("no data found");
            }

    Where am I going wrong?

    Read the article
