Search Results

Search found 2628 results on 106 pages for 'se pavel'.


  • Large number of UPDATE queries slowing down page

    - by Bryan Lewis
    I am reading and validating large fixed-width text files (range from 10-50K lines) that are submitted via our ASP.net website (coded in VB.Net). I do an initial scan of the file to check for basic issues (line length, etc). Then I import each row into a MS SQL table. Each DB rows basically consists of a record_ID (Primary, auto-incrementing) and about 50 varchar fields. After the insert is done, I run a validation function on the file that checks each field in each row based on a bunch of criteria (trimmed length, isnumeric, range checks, etc). If it finds an error in any field, it inserts a record into the Errors table, which has an error_ID, the record_ID and an error message. In addition, if the field fails in a particular way, I have to do a "reset" on that field. A reset might consist of blanking the entire field, or simply replacing the value with another value (e.g. replacing the string with a new one that has all illegals chars taken out). I have a 5,000 line test file. The upload, initial check, and import takes about 5-6 seconds. The detailed error check and insert into the Errors table takes about 5-8 seconds (this file has about 1200 errors in it). However, the "resets" part takes about 40-45 seconds for 750 fields that need to be reset. When I comment out the resets function (returning immediately without actually calling the UPDATE stored proc), the process is very fast. With the resets turned on, the pages take 50 seconds to return. My UPDATE stored proc is using some recommended code from http://sommarskog.se/dynamic_sql.html, whereby it uses CASE instead of dynamic SQL: UPDATE dbo.Records SET dbo.Records.file_ID = CASE @field_name WHEN 'file_ID' THEN @field_value ELSE file_ID END, . . (all 50 varchar field CASE statements here) . WHERE dbo.Records.record_ID = @record_ID Is there any way I can help my performance here. Can I somehow group all of these UPDATE calls into a single transaction? Should I be reworking the UPDATE query somehow? Or is it just sheer quantity of 750+ UPDATEs and things are just slow (it's a quad proc server with 8GB ram). Any suggestions appreciated.
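    A minimal sketch of the "single transaction" idea asked about above: issue all of the per-field resets on one open connection inside an explicit transaction, so SQL Server hardens the log once at COMMIT instead of once per UPDATE. The stored-procedure name ResetField is made up for illustration; only the @record_ID/@field_name/@field_value parameters come from the question. From the VB.Net side the same effect can be had by opening a SqlTransaction on the SqlConnection and assigning it to each SqlCommand before calling the procedure.

    ```sql
    -- Illustrative only: batch the ~750 reset calls into one explicit transaction.
    -- "ResetField" is a placeholder name for the CASE-based UPDATE procedure.
    BEGIN TRANSACTION;

    EXEC dbo.ResetField @record_ID = 101, @field_name = 'file_ID', @field_value = '';
    EXEC dbo.ResetField @record_ID = 102, @field_name = 'file_ID', @field_value = 'cleaned';
    -- ... one EXEC per field that needs a reset, all on the same connection ...

    COMMIT TRANSACTION;
    ```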

    Read the article

  • Calculate pixels within a polygon

    - by DoomStone
    For a school assignment we need to do some image recognition, where we have to find a path for a robot. So far we have been able to find all the polygons in the image, but now we need to generate a pixel map that can be used for an A* algorithm later. We have found a way to do this, shown below, but the problem is that it is very slow, as we go through each pixel and test whether it is inside the polygon. So my question is: is there a way we can generate this pixel map faster? We have a list of coordinates for the polygon: private List<IntPoint> hull; The function "getMap" is called to get the pixel map: public Point[] getMap() { List<Point> points = new List<Point>(); lock (hull) { Rectangle rect = getRectangle(); for (int x = rect.X; x <= rect.X + rect.Width; x++) { for (int y = rect.Y; y <= rect.Y + rect.Height; y++) { if (inPoly(x, y)) points.Add(new Point(x, y)); } } } return points.ToArray(); } getRectangle is used to limit the search, so we don't have to go through the whole image: public Rectangle getRectangle() { int x = -1, y = -1, width = -1, height = -1; foreach (IntPoint item in hull) { if (item.X < x || x == -1) x = item.X; if (item.Y < y || y == -1) y = item.Y; if (item.X > width || width == -1) width = item.X; if (item.Y > height || height == -1) height = item.Y; } return new Rectangle(x, y, width-x, height-y); } And at last, this is how we check whether a pixel is inside the polygon: public bool inPoly(int x, int y) { int i, j = hull.Count - 1; bool oddNodes = false; for (i = 0; i < hull.Count; i++) { if (hull[i].Y < y && hull[j].Y >= y || hull[j].Y < y && hull[i].Y >= y) { try { if (hull[i].X + (y - hull[i].X) / (hull[j].X - hull[i].X) * (hull[j].X - hull[i].X) < x) { oddNodes = !oddNodes; } } catch (DivideByZeroException e) { if (0 < x) { oddNodes = !oddNodes; } } } j = i; } return oddNodes; }
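    A commonly used alternative (not part of the question's code) is to rasterise the hull once with GDI+ and read the filled pixels back, so inPoly is never called per pixel. The sketch below assumes the AForge IntPoint hull has already been converted to System.Drawing.Point; GetPixel can be swapped for LockBits if it is still too slow.

    ```csharp
    using System.Collections.Generic;
    using System.Drawing;
    using System.Drawing.Drawing2D;

    // Sketch only: fill the polygon into an in-memory bitmap, then report every
    // pixel that ended up painted. Requires a reference to System.Drawing.
    public static Point[] GetMapFast(Point[] hull, Rectangle rect)
    {
        var points = new List<Point>();
        using (var bmp = new Bitmap(rect.Width + 1, rect.Height + 1))
        using (var g = Graphics.FromImage(bmp))
        using (var path = new GraphicsPath())
        {
            path.AddPolygon(hull);
            g.TranslateTransform(-rect.X, -rect.Y);   // shift the hull into bitmap space
            g.FillPath(Brushes.Black, path);

            for (int x = 0; x <= rect.Width; x++)
                for (int y = 0; y <= rect.Height; y++)
                    if (bmp.GetPixel(x, y).A > 0)      // painted => inside the polygon
                        points.Add(new Point(rect.X + x, rect.Y + y));
        }
        return points.ToArray();
    }
    ```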

    Read the article

  • Need help converting Ruby code to php code

    - by newprog
    Yesterday I posted this queston. Today I found the code which I need but written in Ruby. Some parts of code I have understood (I don't know Ruby) but there is one part that I can't. I think people who know ruby and php can help me understand this code. def do_create(image) # Clear any old info in case of a re-submit FIELDS_TO_CLEAR.each { |field| image.send(field+'=', nil) } image.save # Compose request vm_params = Hash.new # Submitting a file in ruby requires opening it and then reading the contents into the post body file = File.open(image.filename_in, "rb") # Populate the parameters and compute the signature # Normally you would do this in a subroutine - for maximum clarity all # parameters are explicitly spelled out here. vm_params["image"] = file # Contents will be read by the multipart object created below vm_params["image_checksum"] = image.image_checksum vm_params["start_job"] = 'vectorize' vm_params["image_type"] = image.image_type if image.image_type != 'none' vm_params["image_complexity"] = image.image_complexity if image.image_complexity != 'none' vm_params["image_num_colors"] = image.image_num_colors if image.image_num_colors != '' vm_params["image_colors"] = image.image_colors if image.image_colors != '' vm_params["expire_at"] = image.expire_at if image.expire_at != '' vm_params["licensee_id"] = DEVELOPER_ID #in php it's like this $vm_params["sequence_number"] = -rand(100000000);????? vm_params["sequence_number"] = Kernel.rand(1000000000) # Use a negative value to force an error when calling the test server vm_params["timestamp"] = Time.new.utc.httpdate string_to_sign = CREATE_URL + # Start out with the URL being called... #vm_params["image"].to_s + # ... don't include the file per se - use the checksum instead vm_params["image_checksum"].to_s + # ... then include all regular parameters vm_params["start_job"].to_s + vm_params["image_type"].to_s + vm_params["image_complexity"].to_s + # (nil.to_s => '', so this is fine for vm_params we don't use) vm_params["image_num_colors"].to_s + vm_params["image_colors"].to_s + vm_params["expire_at"].to_s + vm_params["licensee_id"].to_s + # ... then do all the security parameters vm_params["sequence_number"].to_s + vm_params["timestamp"].to_s vm_params["signature"] = sign(string_to_sign) #no problem # Workaround class for handling multipart posts mp = Multipart::MultipartPost.new query, headers = mp.prepare_query(vm_params) # Handles the file parameter in a special way (see /lib/multipart.rb) file.close # mp has read the contents, we can close the file now response = post_form(URI.parse(CREATE_URL), query, headers) logger.info(response.body) response_hash = ActiveSupport::JSON.decode(response.body) # Decode the JSON response string ##I have understood below def sign(string_to_sign) #logger.info("String to sign: '#{string_to_sign}'") Base64.encode64(HMAC::SHA1.digest(DEVELOPER_KEY, string_to_sign)) end # Within Multipart modul I have this: class MultipartPost BOUNDARY = 'tarsiers-rule0000' HEADER = {"Content-type" => "multipart/form-data, boundary=" + BOUNDARY + " "} def prepare_query (params) fp = [] params.each {|k,v| if v.respond_to?(:read) fp.push(FileParam.new(k, v.path, v.read)) else fp.push(Param.new(k,v)) end } query = fp.collect {|p| "--" + BOUNDARY + "\r\n" + p.to_multipart }.join("") + "--" + BOUNDARY + "--" return query, HEADER end end end Thanks for your help.
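    Not a complete translation, but a hedged PHP sketch of the two pieces that matter most here: building string_to_sign/signature, and posting the multipart request (PHP's cURL builds the multipart body itself, so nothing like the Ruby MultipartPost class is needed). Variable names such as $create_url, $developer_key and $filename_in mirror the Ruby constants; the shortened parameter list and the cURL options are assumptions, not part of the original code. Note that Ruby's Base64.encode64 appends a trailing newline while PHP's base64_encode does not, so the two sides may need trimming to agree.

    ```php
    <?php
    // Sketch only - abbreviated parameter list, names assumed from the Ruby code.
    $vm_params = array(
        'image_checksum'  => $image_checksum,
        'start_job'       => 'vectorize',
        'image_type'      => $image_type,
        'licensee_id'     => $developer_id,
        'sequence_number' => mt_rand(0, 1000000000),            // Ruby: Kernel.rand(1000000000)
        'timestamp'       => gmdate('D, d M Y H:i:s') . ' GMT', // Ruby: Time.new.utc.httpdate
    );

    $string_to_sign = $create_url
        . $vm_params['image_checksum']
        . $vm_params['start_job']
        . $vm_params['image_type']
        . $vm_params['licensee_id']
        . $vm_params['sequence_number']
        . $vm_params['timestamp'];

    // Base64.encode64(HMAC::SHA1.digest(key, s)) in Ruby maps to:
    $vm_params['signature'] = base64_encode(hash_hmac('sha1', $string_to_sign, $developer_key, true));

    $vm_params['image'] = new CURLFile($filename_in);  // PHP >= 5.5; use '@' . $filename_in on older versions
    $ch = curl_init($create_url);
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, $vm_params);  // an array value makes cURL send multipart/form-data
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $response_hash = json_decode(curl_exec($ch), true);
    curl_close($ch);
    ```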

    Read the article

  • Is this postgres function cost efficient or still have to clean

    - by kiranking
    There are two tables in postgres db. english_all and english_glob First table contains words like international,confidential,booting,cooler ...etc I have written the function to get the words from english_all then perform for loop for each word to get word list which are not inserted in anglish_glob table. Word list is like I In Int Inte Inter .. b bo boo boot .. c co coo cool etc.. for some reason zwnj(zero-width non-joiner) is added during insertion to english_all table. But in function I am removing that character with regexp_replace. Postgres function for_loop_test is taking two parameter min and max based on that I am selecting words from english_all table. function code is like DECLARE inMinLength ALIAS FOR $1; inMaxLength ALIAS FOR $2; mviews RECORD; outenglishListRow english_word_list;--custom data type eng_id,english_text BEGIN FOR mviews IN SELECT id,english_all_text FROM english_all where wlength between inMinLength and inMaxLength ORDER BY english_all_text limit 30 LOOP FOR i IN 1..char_length(regexp_replace(mviews.english_all_text,'(?)$','')) LOOP FOR outenglishListRow IN SELECT distinct on (regexp_replace((substring(mviews.english_all_text from 1 for i)),'(?)$','')) mviews.id, regexp_replace((substring(mviews.english_all_text from 1 for i)),'(?)$','') where regexp_replace((substring(mviews.english_all_text from 1 for i)),'(?)$','') not in(select english_glob.english_text from english_glob where i=english_glob.wlength) order by regexp_replace((substring(mviews.english_all_text from 1 for i)),'(?)$','') LOOP RETURN NEXT outenglishListRow; END LOOP; END LOOP; END LOOP; END; Once I get the word list I will insert that into another table english_glob. My question is is there any thing I can add to or remove from function to make it more efficient. edit Let assume english_all table have words like footer,settle,question,overflow,database,kingdom If inMinLength = 5 and inmaxLength=7 then in the outer loop footer,settle,kingdom will be selected. For above 3 words inner two loop will apply to get words like f,fo,foo,foot,foote,footer,s,se,set,sett,settl .... etc. In the final process words which are bold will be entered into english_glob with another parameter like 1 to denote it is a proper word and stored in the another filed of english_glob table. Remaining word will be stored with another parameter 0 because in the next call words which are saved in database should not be fetched again. edit2: This is a complete code CREATE TABLE english_all ( id serial NOT NULL, english_all_text text NOT NULL, wlength integer NOT NULL, CONSTRAINT english_all PRIMARY KEY (id), CONSTRAINT english_all_kan_text_uq_id UNIQUE (english_all_text) ) CREATE TABLE english_glob ( id serial NOT NULL, english_text text NOT NULL, is_prop integer default 1, CONSTRAINT english_glob PRIMARY KEY (id), CONSTRAINT english_glob_kan_text_uq_id UNIQUE (english_text) ) insert into english_all(english_text) values ('ant'),('forget'),('forgive'); on function call with parameter 3 and 6 fallowing rows should fetched a an ant f fo for forg forge forget next is insert to another table based on above row insert into english_glob(english_text,is_prop) values ('a',1),('an',1), ('ant',1),('f',0), ('fo',0),('for',1), ('forg',0),('forge',1), ('forget',1), on function call next time with parameter 3 and 7 fallowing rows should fetched.(because f,fo,for,forg are all entered in english_glob table) forgi forgiv forgive
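    As a point of comparison (my own sketch, not the original function), the three nested loops can usually be collapsed into one set-based query: generate_series() produces every prefix length and NOT EXISTS drops prefixes already stored in english_glob. This form relies on the implicit LATERAL reference to a.english_all_text, so it needs PostgreSQL 9.3 or later, and the zwnj regexp_replace() from the question would still have to be applied to english_all_text first.

    ```sql
    -- Sketch only: 5 and 7 stand in for inMinLength / inMaxLength.
    SELECT DISTINCT ON (prefix)
           a.id,
           substring(a.english_all_text FROM 1 FOR n) AS prefix
    FROM   english_all a,
           generate_series(1, char_length(a.english_all_text)) AS n
    WHERE  a.wlength BETWEEN 5 AND 7
      AND  NOT EXISTS (SELECT 1
                       FROM   english_glob g
                       WHERE  g.english_text = substring(a.english_all_text FROM 1 FOR n))
    ORDER  BY prefix;
    ```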

    Read the article

  • bad file descriptor with close() socket (c++)

    - by user321246
    hi everybody! I'm running out of file descriptors when my program can't connect another host. The close() system call doesn't work, the number of open sockets increases. I can se it with cat /proc/sys/fs/file-nr Print from console: connect: No route to host close: Bad file descriptor connect: No route to host close: Bad file descriptor .. Code: #include <stdio.h> #include <stdlib.h> #include <sys/socket.h> #include <netinet/in.h> #include <netdb.h> #include <string.h> #include <iostream> using namespace std; #define PORT 1238 #define MESSAGE "Yow!!! Are we having fun yet?!?" #define SERVERHOST "192.168.9.101" void write_to_server (int filedes) { int nbytes; nbytes = write (filedes, MESSAGE, strlen (MESSAGE) + 1); if (nbytes < 0) { perror ("write"); } } void init_sockaddr (struct sockaddr_in *name, const char *hostname, uint16_t port) { struct hostent *hostinfo; name->sin_family = AF_INET; name->sin_port = htons (port); hostinfo = gethostbyname (hostname); if (hostinfo == NULL) { fprintf (stderr, "Unknown host %s.\n", hostname); } name->sin_addr = *(struct in_addr *) hostinfo->h_addr; } int main() { for (;;) { sleep(1); int sock; struct sockaddr_in servername; /* Create the socket. */ sock = socket (PF_INET, SOCK_STREAM, 0); if (sock < 0) { perror ("socket (client)"); } /* Connect to the server. */ init_sockaddr (&servername, SERVERHOST, PORT); if (0 > connect (sock, (struct sockaddr *) &servername, sizeof (servername))) { perror ("connect"); sock = -1; } /* Send data to the server. */ if (sock > -1) write_to_server (sock); if (close (sock) != 0) perror("close"); } return 0; }
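    For what it is worth, the pattern in main() that would produce exactly this pair of messages is overwriting sock with -1 after a failed connect(): the descriptor returned by socket() is then never closed (so the count in /proc/sys/fs/file-nr keeps growing) and close(-1) reports "Bad file descriptor". A minimal sketch of the tail of the loop that keeps the descriptor intact:

    ```cpp
    // Sketch only: remember the connect() failure in a flag instead of clobbering
    // the descriptor, so close() always gets the value socket() returned.
    bool connected = true;
    if (connect(sock, (struct sockaddr *) &servername, sizeof(servername)) < 0) {
        perror("connect");
        connected = false;
    }

    if (connected)
        write_to_server(sock);

    if (close(sock) != 0)
        perror("close");
    ```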

    Read the article

  • jQuery nextUntil id < num or alternative

    - by Volmar
    Hi, I'm using jQuery to show/hide different LI elements based on their classes. Look at this example. <li id="1" class="fp de1"></li> <li id="2" class="fp de1"><button onclick="hide(2,2);"></li> <li id="3" class="fp de2"><button onclick="hide(3,3);"></li> <li id="4" class="fp de3"><button onclick="hide(4,4);"></li> <li id="5" class="fp de4"></li> <li id="6" class="fp de3"></li> <li id="7" class="fp de3"></li> <li id="8" class="fp de1"><button onclick="hide(8,2);"></li> <li id="9" class="fp de2"><button onclick="hide(9,3);"></li> <li id="10" class="fp de3"><button onclick="hide(10,4);"></li> <li id="11" class="fp de4"></li> You see that some of these have a button with a hide function. What I want is that when you press a hide button, the following elements that have a higher number in their .de# class should be hidden until it reaches an LI with the same .de# class. So if you press the first hide(), I want the LIs with ids 3,4,5,6,7 to be hidden; for the next one I want 4,5,6,7; and for the third I want only id 5 to be hidden. This is the JavaScript I made for it: function hide(id,de){ var de2 = de-1; $('#'+id).nextUntil('li.de'+de2).hide(); } The problem is that this function is not working exactly as I want. It works correctly for the first hide() call and the third, but not for hide() call number two: there it hides IDs 4-8. So what I want is for nextUntil() to hide elements until it reaches an LI element with the same .de# or a lower .de#. I hope I didn't complicate it too much in my description of the problem. If you have a better idea than using nextUntil, I'm all ears.
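    A small sketch of one way to get the "same .de# or lower" behaviour with the markup above: build the nextUntil() stop selector from every level below the clicked one, so a comma-separated list such as 'li.de1, li.de2' ends the run at the first sibling on the same or a shallower level.

    ```javascript
    // Sketch only: hide(3,3) builds the stop selector "li.de1, li.de2",
    // so ids 4-7 are hidden and the run stops at id 8 (class de1).
    function hide(id, de) {
        var stop = [];
        for (var n = 1; n < de; n++) {
            stop.push('li.de' + n);
        }
        $('#' + id).nextUntil(stop.join(', ')).hide();
    }
    ```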

    Read the article

  • Where do you put your unit test?

    - by soulmerge
    I have found several conventions to housekeeping unit tests in a project and I'm not sure which approach would be suitable for our next PHP project. I am trying to find the best convention to encourage easy development and accessibility of the tests when reviewing the source code. I would be very interested in your experience/opinion regarding each: One folder for productive code, another for unit tests: This separates unit tests from the logic files of the project. This separation of concerns is as much a nuisance as it is an advantage: Someone looking into the source code of the project will - so I suppose - either browse the implementation or the unit tests (or more commonly: the implementation only). The advantage of unit tests being another viewpoint to your classes is lost - those two viewpoints are just too far apart IMO. Annotated test methods: Any modern unit testing framework I know allows developers to create dedicated test methods, annotating them (@test) and embedding them in the project code. The big drawback I see here is that the project files get cluttered. Even if these methods are separated using a comment header (like UNIT TESTS below this line) it just bloats the class unnecessarily. Test files within the same folders as the implementation files: Our file naming convention dictates that PHP files containing classes (one class per file) should end with .class.php. I could imagine that putting unit tests regarding a class file into another one ending on .test.php would render the tests much more present to other developers without tainting the class. Although it bloats the project folders, instead of the implementation files, this is my favorite so far, but I have my doubts: I would think others have come up with this already, and discarded this option for some reason (i.e. I have not seen a java project with the files Foo.java and FooTest.java within the same folder.) Maybe it's because java developers make heavier use of IDEs that allow them easier access to the tests, whereas in PHP no big editors have emerged (like eclipse for java) - many devs I know use vim/emacs or similar editors with little support for PHP development per se. What is your experience with any of these unit test placements? Do you have another convention I haven't listed here? Or am I just overrating unit test accessibility to reviewers?

    Read the article

  • Symfony2 Forms: is it possible to bind a form in an "unconventional way"?

    - by DonCallisto
    Imagine this scenario: in our company there is an employee that "play" around graphic,css,html and so on. Our new project will born under symfony2 so we're trying some silly - but "real" - stuff (like authentication from db, submit data from a form and persist it to db and so on..) The problem As far i know, learnt from symfony2 "book" that i found on the site (you can find it here), there is an "automated" way for creating and rendering forms: 1) Build the form up into a controller in this way $form = $this->createFormBuilder($task) ->add('task','text'), ->add('dueDate','date'), ->getForm(); return $this->render('pathToBundle:Controller:templateTwig', array('form'=>$form->createview()); 2) Into templateTwig render the template {{ form_widget(form) }} // or single rows method 3) Into a controller (the same that have a route where you can submit data), take back submitted information if($rquest->getMethod()=='POST'){ $form->bindRequest($request); /* and so on */ } Return to scenario Our graphic employee don't want to access controllers, write php and other stuff like those. So he'll write a twig template with a "unconventional" (from symfony2 point of view, but conventional from HTML point of view) method: /* into twig template */ <form action="{{ path('SestanteUserBundle_homepage') }}" method="post" name="userForm"> <div> USERNAME: <input type="text" name="user_name" value="{{ user.username}}"/> </div> <div> EMAIL: <input type="text" name="user_mail" value="{{ user.email }}"/> </div> <input type="hidden" name="user_id" value="{{ id }}" /> <input type="submit" value="modifica i dati"> </form> Now, if into the controller that handle the submission of data we do something like that public function indexAction(Request $request) { if($request->getMethod() == 'POST'){ // sono arrivato per via di un submit, quindi devo modificare i dati prima di farli vedere a video $defaultData = array('message'=>'ho visto questa cosa in esempio, ma non capisco se posso farne a meno'); $form = $this->createFormBuilder($defaultData) ->add('user_name','text') ->add('user_mail','email') ->add('user_id','integer') ->getForm(); $form->bindRequest($request); //bindo la form ad una request $data = $form->getData(); //mi aspetto un'array chiave=>valore /* .... */ We expected that $data will contain an array with key,value from the submitted form. We found that it isn't true. After googling for a while and try with other "bad" ideas, we're frozen into that. So, if you have a "graphic office" that can't handle directly php code, how can we interface from form(s) to controller(s) ? UPDATE It seems that Symfony2 use a different convention for form's field name and lookup once you've submitted that. In particular, if my form's name is addUser and a field is named userName, the field's name will be AddUser[username] so maybe it have a "dynamic" lookup method that will extract form's name, field's name, concat them and lookup for values. Is it possible?
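    For the hand-written template above, a minimal sketch (field names come from the template; the entity-handling comment is an assumption): the raw POST values can be read from the Request's ParameterBag instead of binding a Symfony form at all.

    ```php
    // Sketch only: read the hand-made form's fields straight from the request.
    public function indexAction(Request $request)
    {
        if ($request->getMethod() == 'POST') {
            $userName = $request->request->get('user_name');
            $userMail = $request->request->get('user_mail');
            $userId   = $request->request->get('user_id');

            // ... load the user entity by $userId, set the new values, persist ...
        }
        // ...
    }
    ```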

    Read the article

  • This Android SDK requires Android Developer Toolkit version 22.0.0 or above. Current version is 21.x.x.

    - by user2626673
    Hi i have a problem with my Eclipse and the SDK (i have download and install the latest ADT Bundle for windows ) when i start my eclipse i get this problem : This Android SDK requires Android Developer Toolkit version 22.0.0 or above. Current version is 20.0.0. please update your SDK tools to the latest version i have tried the option : Help - check for updates But with no new update find then i try this one : How to Update your ADT to Latest Version In Eclipse go to Help Install New Software ---> Add inside Add Repository write the Name: ADT (or whatever you want) and Location: https://dl-ssl.google.com/android/eclipse/ after loading you should get Developer Tools and NDK Plugins check both if you want to use the Native Developer Kit (NDK) in the future or check Developer Tool only click Next Finish But i dont have the option to click next to finish (the back , next and finish options are grey ) Then i try this method : Go here download latest version of ADT-22.0.4.zip (*) At Eclipse > Help > Install new software... > Uncheck Contact all update sites during install to find required software (last bottom preference) that will avoid any unwanted delays during install. then at the same screen (top) Click Add > Archive > select downloaded ADT-X.X.X.zip > follow on screen installation steps But had the same problem when it was to finish the installation.. no option to click ''next'' then i try this one : Help – Install New Software in the ADT menu. Type https://dl-ssl.google.com/android/eclipse/site.xml in “Work with:” and Enter. You can see the “Developer Tools” item. Select it and click Next. Click Next one more. Click Finish accepting the terms of the license agreements. Click OK in the “Security Warning” window. Let the installer restart ADT after installing the tools. But and in this option have the same problem as above.. can click the ''next'' to finish http://i30.photobucket.com/albums/c316/caslor_1978/diafora/atdproblem_zps0d141b7b.jpg i check my version and it is the latest but have the problem http://i30.photobucket.com/albums/c316/caslor_1978/diafora/atdproblem2_zps81de6317.jpg How can i fix this problem ? any suggestion? Win7 / 32bit / java SE Development kit7 update 25

    Read the article

  • Write to text file using ArrayList

    - by Ugochukwutubelum Chiemenam
    The program is basically about reading from a text file, storing the current data into an ArrayList, then writing data (from user input) into the same text file. Kindly let me know where I am going wrong in this sub-part? The data inside the text file is as follows: abc t1 1900 xyz t2 1700 The compiler is showing an error at the line output.format("%s%s%s%n", public class justTesting { private Scanner input; private Formatter output; private ArrayList<Student> tk = new ArrayList<Student>(); public static void main(String[] args) { justTesting app = new justTesting(); app.create(); app.writeToFile(); } public void create() { Text entry = new Text(); Scanner input = new Scanner(System.in); System.out.printf("%s\n", "Please enter your name, ID, and year: "); while (input.hasNext()) { try { entry.setName(input.next()); entry.setTelNumber(input.next()); entry.setDOB(input.next()); for (int i = 0; i < tk.size(); i++) { output.format("%s%s%s%n", tk.get(i).getName(), tk.get(i) .getTelNumber(), tk.get(i).getDOB()); } } catch (FormatterClosedException fce) { System.err.println("Error writing to file."); return; } catch (NoSuchElementException nsee) { System.err.println("Invalid input. Try again: "); input.nextLine(); } System.out.printf("%s\n", "Please enter your name, ID, and year: "); } } public void writeToFile() { try { output = new Formatter("testing.txt"); } catch (SecurityException se) { System.err .println("You do not have write access permission to this file."); System.exit(1); } catch (FileNotFoundException fnfe) { System.err.println("Error opening or creating file."); System.exit(1); } } }
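    A minimal sketch of the intended flow, assuming Student exposes both the setters used on entry and the getters used in the write loop (the original mixes a Text entry with an ArrayList<Student>, which by itself can upset the compiler at the output.format(...) line): open the Formatter before the input loop, add each entry to tk, and only then write the list out - otherwise output is still null when create() runs, because writeToFile() is only called afterwards.

    ```java
    // Sketch only: one method replacing create() + writeToFile(), using the
    // Student accessors from the question.
    public void run() {
        try (Formatter output = new Formatter("testing.txt");
             Scanner input = new Scanner(System.in)) {
            System.out.println("Please enter your name, ID, and year (Ctrl-D to finish):");
            while (input.hasNext()) {
                Student entry = new Student();
                entry.setName(input.next());
                entry.setTelNumber(input.next());
                entry.setDOB(input.next());
                tk.add(entry);                      // the original never adds to the list
            }
            for (Student s : tk) {
                output.format("%s %s %s%n", s.getName(), s.getTelNumber(), s.getDOB());
            }
        } catch (FileNotFoundException fnfe) {
            System.err.println("Error opening or creating file.");
        }
    }
    ```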

    Read the article

  • How to upload a file into database by using Servlet?

    - by user1765496
    Hi all iam working on servlets, so i need to upload a file by using servlet as follows my code. package com.limrasoft.image.servlets; import java.io.*; import javax.servlet.*; import javax.servlet.http.*; import javax.servlet.annotation.*; import java.sql.*; @WebServlet(name="serv1",value="/s1") public class Account extends HttpServlet{ public void doPost(HttpServletRequest req,HttpServletResponse res)throws ServletException,IOException{ try{ Class.forName("oracle.jdbc.driver.OracleDriver"); Connecection con=null; try{ con=DriverManager.getConnection("jdbc:oracle:thin:@localhost:1521:xe","system","sajid"); PrintWriter pw=res.getWriter(); res.setContentType("text/html"); String s1=req.getParameter("un"); string s2=req.getParameter("pwd"); String s3=req.getParameter("g"); String s4=req.getParameter("uf"); PreparedStatement ps=con.prepareStatement("insert into account(?,?,?,?)"); ps.setString(1,s1); ps.setString(2,s2); ps.setString(3,s3); File file=new File("+s4+"); FileInputStream fis=new FileInputStream(fis); int len=(int)file.length(); ps.setBinaryStream(4,fis,len); int c=ps.executeUpdate(); if(c==0){pw.println("<h1>Registratin fail");} else{pw.println("<h1>Registration fail");} } finally{if(con!=null)con.close();} } catch(ClassNotFoundException ce){pw.println("<h1>Registration Fail");} catch(SQLException se){pw.println("<h1>Registration Fail");} pw.flush(); pw.close(); } } I have written the above code for file upload into database, but it giving error as "HTTP Status 500 - Servlet3.java (The system cannot find the file specified)" Could you plz help me to do this code,thanks in advanse.
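    The "system cannot find the file specified" message is consistent with new File(s4) naming a path that only exists on the user's machine, so here is a hedged sketch using the Servlet 3.0 upload API instead (the form also needs enctype="multipart/form-data", and the field is assumed to be <input type="file" name="uf">). The connection details are copied from the question, the success/failure pages are abbreviated, and VALUES is added because insert into account(?,?,?,?) is not valid SQL.

    ```java
    // Sketch only: read the uploaded bytes from the multipart request instead of
    // opening new File(s4), which names a file on the client, not the server.
    @MultipartConfig
    @WebServlet(name = "serv1", value = "/s1")
    public class Account extends HttpServlet {
        public void doPost(HttpServletRequest req, HttpServletResponse res)
                throws ServletException, IOException {
            Part filePart = req.getPart("uf");              // <input type="file" name="uf">
            try (InputStream in = filePart.getInputStream();
                 Connection con = DriverManager.getConnection(
                         "jdbc:oracle:thin:@localhost:1521:xe", "system", "sajid")) {
                PreparedStatement ps = con.prepareStatement(
                        "insert into account values(?,?,?,?)");
                ps.setString(1, req.getParameter("un"));
                ps.setString(2, req.getParameter("pwd"));
                ps.setString(3, req.getParameter("g"));
                ps.setBinaryStream(4, in, (int) filePart.getSize());
                int rows = ps.executeUpdate();
                res.getWriter().println(rows > 0 ? "Registration OK" : "Registration failed");
            } catch (SQLException se) {
                res.getWriter().println("Registration failed: " + se.getMessage());
            }
        }
    }
    ```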

    Read the article

  • How to facebook getuser() after login with javascript SDK

    - by user1848205
    So I have to ask for extended permission by clicking the enter button, but after the login is necessary to refresh the page in order to display the app. Here's my code: <?php require 'facebook.php'; $facebook = new Facebook(array( 'appId' => '< THE APPID >', 'secret' => '< THE SECRET >', 'cookie' => true, )); $user = $facebook->getUser(); if ($user) { try { $user_profile = $facebook->api('/me'); } catch (FacebookApiException $e) { error_log($e); $user = null; } } ?> <body> <div id="fb-root"></div> <script> window.fbAsyncInit = function() { FB.init({ appId : '< THE APPID >', status : true, cookie : true, xfbml : true }); // Additional initialization code such as adding Event Listeners goes here $('#btn-enter').click(function(){ login(); }); }; (function(d){ var js, id = 'facebook-jssdk', ref = d.getElementsByTagName('script')[0]; if (d.getElementById(id)) {return;} js = d.createElement('script'); js.id = id; js.async = true; js.src = "//connect.facebook.net/en_US/all.js"; ref.parentNode.insertBefore(js, ref); }(document)); function login() { FB.login(function(response) { if (response.authResponse) { // connected } else { // cancelled } //}); }, {scope: 'read_friendlists,friends_photos,publish_stream'}); } </script> <?php if ($user): ?> <!--Here is my APP--> <?php else: ?> <a id="btn-enter">Enter</a> <?php endif ?> Is there a better way to do this ? What works for me is: function login() { FB.login(function(response) { if (response.authResponse) { top.location.href='https://the_app_url'; } else { } //}); }, {scope: 'read_friendlists,friends_photos,publish_stream'}); } But this causes the entire page to refresh and is not 'elegant' per se...
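    One hedged alternative to a full top.location.href reload (the #app container and app_content.php URL below are placeholders, not part of the original page): with cookie: true in FB.init, the PHP SDK can read the session on the next request, so once FB.login() reports an authResponse the app markup can simply be pulled in with AJAX.

    ```javascript
    // Sketch only: swap the app into the page instead of reloading it.
    function login() {
        FB.login(function (response) {
            if (response.authResponse) {
                $('#btn-enter').hide();
                $('#app').load('app_content.php');   // placeholder endpoint that renders the app
            }
        }, {scope: 'read_friendlists,friends_photos,publish_stream'});
    }
    ```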

    Read the article

  • Advice needed: stay with Java team or move to C++ team?

    - by user68759
    Some background - I have been programming in Java as a professional for the last few years. This is mainly using Java SE. I have also touched bits and pieces of other various Java technologies and have some basic knowledge about them. I consider my self as an intermediate Java programmer. I like Java very much. I think it is only going to get bigger. Recently, my manager asked my opinion on whether I would like to be transferred to another team within the company that is developing a product in C++. This is mainly because my current Java team simply didn't make enough money due to poor sales and the economic downturn. Now, I have never had any experience with C++ nor have I ever coded a single line of code in C++. I have always wanted to learn it and now is my chance. But I really want to make sure I get benefit out of it in the future, in the sense that I will have the skills that will still be on-demand in the future. So, what do you experts think? Is C++ still the language to learn these days to secure yourself for the future? What will I learn more in C++ but not in Java? And are they worthy to learn considering the current and possible future demands in IT industry? (Apart from the obvious more control over memory management and something along that line.) What is a good excuse to refuse the offer in order to stay with the Java team? I don't want to blatantly refuse it because you can never predict the future and I could possibly come back to my manager in the future and ask him to transfer me to the C++ team. How do I say it nicely that I am taking the offer but I would like to still be involved with Java one way or another, such as when there is a new Java project I would like to be considered. I have to admit that I am kind of 50-50 at the moment. I want to learn C++ for the sake of improving my skills and also helping my company to reduce the fund required for the Java team. But it is also hard for me to leave Java because I know Java is going to get bigger, so I am afraid of getting behind when I start concentrating on C++. I could, of course, decide to just join the C++ team, and then spend my free time reading about Java to keep in touch with it, but I thought I would ask anyway in case some people can point out the strong points of either over the other given the current and possibly future circumstances.

    Read the article

  • Autopopulate from Select box from database

    - by Chris Spalton
    hope you can help, please forgive any poor coding or anytihng, I'm new to this and just hacking my way through to get things to work. That said, on one of my projects I have this code, which successfully populates the dropdown from a database when the page is loaded: <select name="Region" id="Region"> <option value="">-- Select Region --</option> <?php $region=$POST['Region']; if ($region); { $regionquery = "SELECT DISTINCT REGION FROM Sales_Execs "; $regionresult = mysql_query($regionquery); while($row = mysql_fetch_array($regionresult)) { echo "<option value=\"".$row['REGION']."\">".$row['REGION']."</option>\n "; } } ?> <script type="text/javascript"> document.getElementById('Region').value = <?php echo json_encode(trim($_POST['Region']));?>; </script> </select> On my next project that I'm working on now, I need to do the same thing, so I copied the above code amended, and placed in my new project: <select name="Sales_Exec" id="Sales_Exec"> <option value="">-- Select SE --</option> <?php $salesexec=$POST['Sales_Exec']; if ($salesexec); { $salesexecquery = "SELECT DISTINCT Assigned FROM Data "; $salesexecresult = mysql_query($salesexecquery); while($row = mysql_fetch_array($salesexecresult)) { echo "<option value=\"".$row['ASSIGNED']."\">".$row['ASSIGNED']."</option>\n "; } } ?> <script type="text/javascript"> document.getElementById('Sales_Exec').value = <?php echo json_encode(trim($_POST['Sales_Exec']));?>; </script> </select> This second chunk of code doesn't work... and I can't work out why as it seems I've copied it all and amended all the neccersary parts, can anyone spot what is wrong? Thankyou!
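    Not a definitive answer, but a sketch of the second block with the two most likely culprits adjusted: $POST should be $_POST, and mysql_fetch_array() keys use the column name exactly as written in the SELECT, so 'Assigned' rather than 'ASSIGNED' (the first block gets away with it because its query says REGION in upper case). The or die(mysql_error()) is added only to surface any query error.

    ```php
    <?php
    // Sketch only - table and column names taken from the question.
    $salesexec = $_POST['Sales_Exec'];                       // was $POST['Sales_Exec']

    $salesexecquery  = "SELECT DISTINCT Assigned FROM Data";
    $salesexecresult = mysql_query($salesexecquery) or die(mysql_error());

    while ($row = mysql_fetch_array($salesexecresult)) {
        echo "<option value=\"" . $row['Assigned'] . "\">" . $row['Assigned'] . "</option>\n";
    }
    ?>
    ```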

    Read the article

  • How can I switch an existing set of Subversion repositories to use ActiveDirectory?

    - by jpierson
    I have a set of private Subversion repositories on a Windows Server 2003 box which developers access via SVNServe over the svn:// protocol. Currently we have been using the authz and passwd files for each repository to control access however with the growing number of repositories and developers I'm considering switching to using their credentials from ActiveDirectory. We run in an all Microsoft shop and use IIS instead of Apache on all of our web servers so I would prefer to continue to use SVNServe if possible. Besides it being possible, I'm also concerned about how to migrate our repositories so that the history for the existing users map to the correct ActiveDirectory accounts. Keep in mind also that I'm not the network administrator and I'm not terrible familiar with ActiveDirectory so I'll probably have to go through some other people to get the changes made in ActiveDirectory if necessary. What are my options? UPDATE 1: It appears from the SVN documentation that by using SASL I should be able to get SVNServe to authenticate using ActiveDirectory. To clarify, the answer that I'm looking for is how to go about configuring SVNServe (if possible) to use ActiveDirectory for authentication and then how to modify an existing repository to remap existing svn users to their ActiveDirectory domain login accounts. UPDATE 2: It appears that the SASL support in SVNServe works off of a plugin model and the documentation only shows as an example. Looking at the Cyrus SASL Library it looks like a number of authentication "mechanisms" are supported but I'm not sure which one is to be used for ActiveDirectory support nor can I find any documentation about such matters. UPDATE 3: Ok, well it looks like in order to communication with ActiveDirectory I'm looking to use saslauthd instead of sasldb for the *auxprop_plugin* property. Unfortunately it appears that according to some posts (possibly outdated and inaccurate) saslauthd does not build on Windows and such endeavors are considered a work in progress. UPDATE 4: The lastest post I've found on this topic makes it sound as though the proper binaries () are available through the MIT Kerberos Library but it sounds like the author of this post on Nabble.com is still having issues getting things working. UPDATE 5: It looks like from the TortoiseSVN discussions and also this post on svn.haxx.se that even if saslgssapi.dll or whatever necessary binaries are available and configured on the Windows server that the clients will also need the same customization in order to work with these repositories. If this is true, we will only be able to get ActiveDirectory support from a windows client only if changes are made in these clients such as TortoiseSVN and CollabNet build of the client binaries to support such authentication schemes. Although thats what these posts suggest, this is contradictory from what I originally assumed from other reading in that being SASL compatible should require no changes on the client but instead only that the server be setup to handle the authentication mechanism. After reading a bit more carefully in the document about Cyrus SASL in Subversion section 5 states "1.5+ clients with Cyrus SASL support will be able to authenticate against 1.5+ servers with SASL enabled, provided at least one of the mechanisms supported by the server is also supported by the client." So clearly GSSAPI support (which I understand is required for Active Directory) must be available within the client and the server. 
I have to say, I'm learning way too much about the internals of how Subversion handles authentication than I ever wanted to and I juts simply want to get an answer about whether I can have Active Directory authentication support when using SVNServe on a Windows server and accessing this from Windows clients. According to the official documentation it seems that this is possible however you can see that the configuration is not trivial if even possible at all.

    Read the article

  • How do I achieve lossless JPEG joining without truncation of partial MCUs?

    - by Karan
    I am working on a project for which I need to join thousands of JPEG images losslessly (I'm not talking about the Lossless JPEG/JPEG 2000/JPEG-LS formats here). Aforementioned images have varying levels of chroma subsampling (1x1, 1x2, 2x1, 2x2), resulting in varying MCU sizes (8x8, 8x16, 16x8, 16x16 px). However, in any given set of images to be joined together, each image has identical characteristics. For now, let's assume I only have 2 images. Image #1 (I1) is 256x256px in size and #2 (I2) is 239x256px in size. 2x2 subsampling is used such that MCU size is 16x16px. I2 thus obviously has partial MCUs at the right edge, since its width is not evenly divisible by 16. (I've read that so-called 'partial' MCUs actually contain the data for a complete MCU, but the image dimensions instruct the renderer to only display the relevant pixels and ignore/hide the extra ones.) Looking around for tools that could help me accomplish this, I came across a modified version of JpegTran, that contains an experimental lossless crop 'n' drop (cut & paste) feature. All the other apps I encountered that support lossless JPEG editing seem to utilise IJG's (JpegTran) code, so this seemed to be the logical choice. Also, given the sheer number of images, I wanted something that could preferably be run from the command-line so that I could automate the process with a script. Unfortunately, while everything else worked fine, it seems JpegTran truncates the partial MCUs instead of retaining them. Thus in the example above, the final joined image contains all of I1, but only 224x256px of I2. Why 224? because 239 = 14x16+15, which means there are 14 full MCUs along the width, and 1 partial MCU (just 1px short of the complete 16px). The last 15px is what is getting blanked, leading to a 495x256px image with 15px of blank (grey) pixels at the right edge. See images below (shame that imgur re-compresses them): (left )+ (right) = As you can clearly see, the red portion (15px) of I2 has been truncated by JpegTran. If the MCUs were 8px in width, the lost portion would have been the right-most 7px of I2. Similarly, joining I3 (256x239px) *below * I1 would cause the loss of 7 or 15px, depending on the MCU height of course: (top) + (bottom) = If this is better suited to some other StackExchange (or even non-SE) site/forum where JPEG/image encoding experts hang out, do let me know. Can what I am attempting even be done, or is the so-called 'lossless' JPEG crop 'n' drop only valid for images with no partial MCUs? (Maybe that is why the feature is still in an "experimental state" more than a decade after being introduced...) Until I know for sure that it is impossible, I am not interested in suggestions for lossy joining. Avoiding any generational loss whatsoever is the sole reason why I'm breaking my head over this, else I'd have had this done and dusted ages ago. Also, I am not interested in suggestions related to switching image formats. I do not control the source of the images. If it can be done, how? Please keep in mind that any alternate apps suggested must ideally be capable of automation, given the requirements stated above. (But given how it's unlikely I'm even going to receive a useful answer given the constraints, I would be happy with any app suggestion just as long as it actually works. I can always look into an AutoIT/AHK script or something later to automate it.) 
I understand that an odd-sized final image might cause issues, so I am fully prepared to accept any solution, even if it results in blank (preferably black) padding pixels to the right/bottom. What I mean is, I don't care if I1 + I2 is 496x256px (1px padding) or even 512x256px (17px padding) in size, as long as the final image contains all the actual image data from both source images, and the entire process is lossless. Obviously the lesser the padding (if any), the better, but at this point any solution will do. A Windows-based solution would be perfect, but a Linux-based one would be entirely acceptable.

    Read the article

  • dvd drive I/O error?

    - by bobby
    i get dis error evry time i burn a cd/dvd thru my dvd drive...!! Nero Burning ROM bobby 4C85-200E-4005-0004-0000-7660-0800-35X3-0000-407M-MX37-**** (*) Windows XP 6.1 IA32 WinAspi: - NT-SPTI used Nero Version: 7.11.3. Internal Version: 7, 11, 3, (Nero Express) Recorder: Version: UL01 - HA 1 TA 1 - 7.11.3.0 Adapter driver: HA 1 Drive buffer : 2048kB Bus Type : default CD-ROM: Version: 52PP - HA 1 TA 0 - 7.11.3.0 Adapter driver: HA 1 === Scsi-Device-Map === === CDRom-Device-Map === ATAPI-CD ROM-DRIVE-52MAX F: CdRom0 HL-DT-ST DVDRAM GSA-H12N G: CdRom1 ======================= AutoRun : 1 Excluded drive IDs: WriteBufferSize: 83886080 (0) Byte BUFE : 0 Physical memory : 958MB (981560kB) Free physical memory: 309MB (317024kB) Memory in use : 67 % Uncached PFiles: 0x0 Use Inquiry : 1 Global Bus Type: default (0) Check supported media : Disabled (0) 11.6.2010 CD Image 10:43:02 AM #1 Text 0 File SCSIPTICommands.cpp, Line 450 LockMCN - completed sucessfully for IOCTL_STORAGE_MCN_CONTROL 10:43:02 AM #2 Text 0 File Burncd.cpp, Line 3186 HL-DT-ST DVDRAM GSA-H12N Buffer underrun protection activated 10:43:02 AM #3 Text 0 File Burncd.cpp, Line 3500 Turn on Disc-At-Once, using CD-R/RW media 10:43:02 AM #4 Text 0 File DlgWaitCD.cpp, Line 307 Last possible write address on media: 359848 ( 79:59.73) Last address to be written: 318783 ( 70:52.33) 10:43:02 AM #5 Text 0 File DlgWaitCD.cpp, Line 319 Write in overburning mode: NO (enabled: CD) 10:43:02 AM #6 Text 0 File DlgWaitCD.cpp, Line 2988 Recorder: HL-DT-ST DVDRAM G SA-H12N; CDR co de: 00 97 27 18; O SJ entry from: Pla smon Data systems Ltd. ATIP Data: Special Info [hex] 1: D0 00 A0, 2: 61 1B 12 (LI 97:27.18), 3: 4F 3B 4A ( LO 79:59.74) Additional Info [hex] 1: 00 00 00 (invalid), 2: 00 00 00 (invalid), 3: 00 0 0 00 (invalid) 10:43:02 AM #7 Text 0 File DlgWaitCD.cpp, Line 493 Protocol of DlgWaitCD activities: TRM_DATA_MODE1, 2048, config 0, wanted index0 0 blocks, length 318784 blo cks [G: HL-DT-ST DVDRAM GSA-H12N] -------------------------------------------------------------- 10:43:02 AM #9 Text 0 File ThreadedTransferInterface.cpp, Line 986 Prepare [G: HL-DT-ST DVDRAM GSA-H12N] for write in CUE-sheet-DAO DAO infos: ========== MCN: "" TOCType: 0x00; Se ssion Clo sed, disc fixated Tracks 1 to 1: Idx 0 Idx 1 Next T rk 1: TRM_DATA_MODE1, 2048/0x00, FilePos 0 307200 6531768 32, ISRC "" DAO layout: =========== ___Start_|____Track_|_Idx_|_CtrlAdr_|_____Size_|______NWA_|_RecDep__________ -150 | lead-in | 0 | 0x41 | 0 | 0 | 0x00 -150 | 1 | 0 | 0x41 | 0 | 0 | 0x00 0 | 1 | 1 | 0x41 | 318784 | 318784 | 0x00 318784 | lead-out | 1 | 0x41 | 0 | 0 | 0x00 10:43:02 AM #10 Text 0 File SCSIPTICommands.cpp, Line 240 SPTILockVolume - completed successfully for FSCTL_LOCK_VOLUME 10:43:02 AM #11 Text 0 File Burncd.cpp, Line 4286 Caching options: cache CDRom or Network-Yes, small files-Yes ( ON 10:43:03 AM #19 Text 0 File MMC.cpp, Line 18034 CueData, Len=32 41 00 00 14 00 00 00 00 41 01 00 10 00 00 00 00 41 01 01 10 00 00 02 00 41 aa 01 14 00 46 34 22 10:43:03 AM #20 Text 0 File ThreadedTransfer.cpp, Line 268 Pipe memory size 83836800 10:43:16 AM #21 Text 0 File Cdrdrv.cpp, Line 1405 10:43:16.806 - G: HL-DT-ST DVDRAM GSA-H12N : Queue again later 10:43:42 AM #22 SPTI -1502 File SCSIPassThrough.cpp, Line 181 CdRom1: SCSIStatus(x02) WinError(0) NeroError(-1502) Sense Key: 0x04 (KEY_HARDWARE_ERROR) Nero Report 2 Nero Burning ROM Sense Code: 0x08 Sense Qual: 0x03 CDB Data: 0x2A 00 00 00 4D 00 00 00 20 00 00 00 Sense Area: 0x70 00 04 00 00 00 00 10 53 29 A1 80 08 03 Buffer 
x0c7d9a40: Len x10000 0xDC 87 EB 41 6E AC 61 5A 07 B2 DB 78 B5 D4 D9 24 0x8D BC 51 38 46 56 0F EE 16 15 5C 5B E3 B0 10 16 0x14 B1 C3 6E 30 2B C4 78 15 AB D5 92 09 B7 81 23 10:43:42 AM #23 CDR -1502 File Writer.cpp, Line 306 DMA-driver error, CRC error G: HL-DT-ST DVDRAM GSA-H12N 10:43:55 AM #24 Phase 38 File dlgbrnst.cpp, Line 1767 Burn process failed at 48x (7,200 KB/s) 10:43:55 AM #25 Text 0 File SCSIPTICommands.cpp, Line 287 SPTIDismountVolume - completed successfully for FSCTL_DISMOUNT_VOLUME 10:44:01 AM #26 Text 0 File Cdrdrv.cpp, Line 11412 DriveLocker: UnLockVolume completed 10:44:01 AM #27 Text 0 File SCSIPTICommands.cpp, Line 450 UnLockMCN - completed sucessfully for IOCTL_STORAGE_MCN_CONTROL Existing drivers: Registry Keys: HKLM\Software\Microsoft\Windows NT\CurrentVersion\WinLogon Nero Report 3

    Read the article

  • What do these messages in the Qmail maillog indicate?

    - by Griffo
    There seems to be an endless supply of messages in the Qmail maillog for a single address. Can anyone shed some light on why this might be and whether it is a problem? To me it looks like either spam or some sort of unhandled problem. It strikes me as unusual that the 'from=' field is blank. This is on a VPS using Plesk in case that's important. Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23593]: [email protected] Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23586]: from= Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23586]: [email protected] Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23585]: from= Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23585]: [email protected] Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23584]: from= Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23584]: [email protected] Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23583]: from= Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23583]: [email protected] Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23600]: from= Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23600]: [email protected] Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23599]: from= Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23599]: [email protected] EDIT Here's a sample of one of the emails: Received: (qmail 5603 invoked for bounce); 29 Jun 2011 07:46:31 +0100 Date: 29 Jun 2011 07:46:31 +0100 From: [email protected] To: [email protected] Subject: failure notice Hi. This is the qmail-send program at vps-1001108-595.cp.blacknight.com. I'm afraid I wasn't able to deliver your message to the following addresses. This is a permanent error; I've given up. Sorry it didn't work out. <[email protected]>: 200.147.36.13 does not like recipient. Remote host said: 450 4.7.1 Client host rejected: cannot find your hostname, [78.153.208.195] Giving up on 200.147.36.13. I'm not going to try again; this message has been in the queue too long. --- Below this line is a copy of the message. Return-Path: <[email protected]> Received: (qmail 15585 invoked by uid 48); 22 Jun 2011 07:38:26 +0100 Date: 22 Jun 2011 07:38:26 +0100 Message-ID: <[email protected]> To: [email protected] Subject: Cadastre-se e Concorra ? um Carro! MIME-Version: 1.0 Content-type: text/html; charset=iso-8859-1 From: Cielo Fidelidade <[email protected]> <!DOCTYPE HTML> <html> ... <body text removed> <html> If I understand this correctly, this is saying that an email sent by my server, from address [email protected], could not be delivered. However, [email protected] is not a valid email address on my server, so how can email be sent from this address on my server? I have tested whether my server is acting as an open relay, and it isn't. So how else could this be happening? I am getting thousands of these every day. What can I do to prevent it?

    Read the article

  • Troubleshooting High Load on Plesk LAMP Dedicated Server

    - by Callmeed
    I have 2 nearly identical dedicated servers with the same provider. They also run a nearly identical software stack: RedHat 5 64-bit, Plesk, PHP, Apache, & MySQL. We use them for hosting custom sites we build. The problem is, while our 1st server has a load average (in top) of around 0.3, the 2nd server consistently has a load average of around 4.0 or higher. Basic functions in Plesk are delayed and there is a bit of latency when executing shell commands. Anyone have ideas why it would be so high? And why it would differ from our other server so much? Here is my current top output (sorted by %MEM) ... Any help is much appreciated ... top - 21:48:04 up 100 days, 4:28, 1 user, load average: 3.74, 4.20, 4.23 Tasks: 336 total, 1 running, 335 sleeping, 0 stopped, 0 zombie Cpu(s): 0.8%us, 0.4%sy, 0.0%ni, 91.3%id, 7.5%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 12290884k total, 11886452k used, 404432k free, 2920212k buffers Swap: 2096472k total, 244k used, 2096228k free, 6560692k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 22536 apache 15 0 860m 547m 6484 S 0.0 4.6 0:10.96 httpd 26467 apache 15 0 859m 546m 6408 S 0.0 4.5 0:07.67 httpd 3620 apache 15 0 859m 545m 5552 S 0.0 4.5 0:06.15 httpd 1895 apache 15 0 858m 544m 6356 S 0.0 4.5 0:08.25 httpd 16933 apache 15 0 858m 544m 5488 S 0.0 4.5 0:01.57 httpd 6431 apache 15 0 856m 542m 6076 S 10.6 4.5 0:05.32 httpd 14417 apache 15 0 856m 542m 5568 S 0.0 4.5 0:03.88 httpd 15403 apache 15 0 855m 541m 5616 S 0.0 4.5 0:03.73 httpd 19165 apache 15 0 853m 539m 6252 S 0.0 4.5 0:12.40 httpd 15898 apache 15 0 852m 539m 5376 S 0.0 4.5 0:02.68 httpd 14401 apache 15 0 851m 538m 5460 S 0.0 4.5 0:02.97 httpd 15393 apache 15 0 851m 538m 5404 S 0.0 4.5 0:03.12 httpd 15427 apache 15 0 851m 538m 5496 S 0.0 4.5 0:02.44 httpd 14412 apache 15 0 851m 538m 5324 S 0.0 4.5 0:02.15 httpd 18330 apache 15 0 851m 537m 5136 S 0.0 4.5 0:01.30 httpd 18303 apache 15 0 848m 535m 5140 S 0.0 4.5 0:00.47 httpd 21190 apache 15 0 845m 533m 3988 S 0.0 4.4 0:00.33 httpd 15923 root 18 0 822m 521m 9928 S 0.0 4.3 10:04.81 httpd 22021 apache 15 0 828m 520m 4964 S 0.0 4.3 0:00.16 httpd 22146 apache 15 0 823m 515m 3016 S 0.0 4.3 0:00.02 httpd 22345 apache 15 0 822m 514m 2408 S 0.0 4.3 0:00.00 httpd 14721 apache 15 0 733m 510m 488 S 0.0 4.3 0:00.00 httpd 5094 root 15 0 1452m 122m 15m S 1.0 1.0 852:24.24 java 4636 mysql 15 0 532m 57m 6440 S 1.0 0.5 488:05.84 mysqld 4799 popuser 15 0 166m 53m 2368 S 0.0 0.4 0:36.64 spamd 16761 popuser 15 0 159m 46m 2312 S 0.0 0.4 0:00.38 spamd 4797 root 15 0 158m 45m 2448 S 0.0 0.4 0:01.27 spamd 5074 root 34 19 255m 20m 2144 S 0.0 0.2 1:37.53 yum-updatesd 9917 named 15 0 366m 9804 1980 S 0.0 0.1 0:10.26 named 4332 sso 18 0 119m 8028 5212 S 0.0 0.1 0:00.06 sw-engine-cgi 4341 sso 18 0 119m 8028 5212 S 0.0 0.1 0:00.07 sw-engine-cgi 4350 sso 18 0 119m 8028 5212 S 0.0 0.1 0:00.09 sw-engine-cgi 4352 sso 18 0 119m 8028 5212 S 0.0 0.1 0:00.11 sw-engine-cgi 4376 ntp 15 0 23388 5020 3896 S 0.0 0.0 0:00.58 ntpd 4331 sw-cp-se 15 0 61336 4572 1480 S 0.0 0.0 5:53.22 sw-cp-serverd 4213 haldaemo 15 0 31252 4460 1684 S 0.0 0.0 0:01.52 hald 4778 postgres 18 0 117m 4164 3484 S 0.0 0.0 0:00.11 postmaster 18555 root 16 0 98.3m 3716 2852 S 0.0 0.0 0:00.01 sshd 4488 sso 18 0 119m 3044 224 S 0.0 0.0 0:00.00 sw-engine-cgi 4489 sso 18 0 119m 3044 224 S 0.0 0.0 0:00.00 sw-engine-cgi 4492 sso 18 0 119m 3044 224 S 0.0 0.0 0:00.00 sw-engine-cgi 4493 sso 18 0 119m 3044 224 S 0.0 0.0 0:00.00 sw-engine-cgi 4490 sso 18 0 119m 3040 220 S 0.0 0.0 0:00.00 sw-engine-cgi

    Read the article

  • Installing XP through USB-flash disc

    - by Crazy Buddy
    I don't know whether this could be asked here... So, Pardon me for this. Probably, this is based on My laptop and a contradiction to this question asked already here... I tried to format my "government-provided" laptop (No CD-drive). I thought those IT guys are proving that they're too smart..! I have the Windows XP CD right now. I didn't like to stick with some home-made OS from our Government. So, I used another laptop to format the govt. thing and tried to install XP (As I didn't have enough bills to invest on Windows 7 or 8). Case 1: First, I allowed WinSetupFromUSB 1.0 beta 8 to deal with the flash disk. I wondered for the first time that XP text-screen appeared. Using the first part, I formatted my laptop. It started to copy files, entered into the next part, and completed the installation. I started my PC for the first time. XP splash screen appeared. Suddenly, a blue screen flashed and disappeared (I can't even read what it says). Rebooted and arrived at the screen, "Start Windows Normally". It happens and happens still - like an infinite loop :-) Case 2: Next, I used Rufus 1.2.0 to transfer files to my Flash and it screwed everything out. Even if I used Flash to boot, it arrives to the same screen "Start Windows normally". It doesn't show any response of Flash being inserted. Then I recognized that, It's simply copies everything to the flash disk. Case 3: Then, I started with Novicorp WinToFlash (giving utmost priority to this site). I booted with the disk. I entered into the first part - "Text mode". Some lines started running like that "Press F6 if you..." like that. The last thing I saw was, "Setup is starting Windows..." Suddenly a blue screen appeared like this captured one. I've a suspicion that the same screen appears again & again in first case. Man, I'm dead. Case 4: For the sake of my last hope, I used WinSetupFromUSB 0.1.1. I was shocked on arriving at a screen which says something "GRUB4DOS" like that and some commands like {command line, reboot, halt, \find menu.lst} and when I go inside those "find" options, I see "Error:15 - File not found". Googling provided some commands to mount SETUPLDR.BIN file in the "grub" thing which also proved unsuccessful... Some sites say that Factory reset uses only some function keys. A guy said that it's F11 for lenovo. Screw him. It's all a waste-of-time. But, I think SE would help me out. Is our government IT guys doin' this to me? Are they Soooo smart to spark some blue screen in front of me to freak me out? Any suggestions or new (useful) USB transferring things would be appreciated. It's very urgent. So, It'd be better if you guys pay some attention in debugging and help me out..? Thanks for your time guys :-)

    Read the article

  • Clarification On Write-Caching Policy, Its Underlying Options And How It Applies To Hard Drives And Solid-State Drives

    - by Boris_yo
    In last week after doing more research on subject matter, I have been wondering about what I have been neglecting all those years to understand write-caching policy, always leaving it on default setting. Write-caching policy improves writing performance and consists of write-back caching and write-cache buffer flushing. This is how I understand all the above, but correct me if I erred somewhere: Write-through cache / Write-through caching itself is not a part of write caching policy per se and it's when data is written to both cache and storage device so if Windows will need that data later again, it is retrieved from cache and not from storage device which means only improved read performance as there is no need for waiting for storage device to read required data again. Since data is still written to storage device, write performance isn't improved and represents no risk of data loss or corruption in case of power failure or system crash while only data in cache gets lost. This option seems to be enabled by default and is recommended for removable devices with no need to use function of "Safely Remove Hardware" on user's part. Write-back caching is similar to above but without writing data to storage device, periodically releasing data from cache and writing to storage device when it is idle. In my opinion this option improves both read and write performance but represents risk if power failure or system crash occurs with the outcome of not only losing data eventually to be written to storage device, but causing file inconsistencies or corrupted file system. Write-back caching cannot be enabled together with write-through caching and it is not recommended to be enabled if no backup power supply is availabe. Write-cache buffer flushing I reckon is similar to write-back caching but enables immediate release and writing of data from cache to storage device right before power outage occurs but I don't know if it applies also to occasional system crash. This option seem to be complementary to write-back cache reducing or potentially eliminating risk of data loss or corruption of file system. I have questions about relevance of last 2 options to today's modern SSDs in order to get best performance and with less wear on SSDs: I know that traditional hard drives come with onboard cache (I wonder what type of cache that is), but do SSDs also come with cache? Assuming they do, is this cache faster than their NAND flash and system RAM and worth taking the risk of utilizing it by enabling write-back cache? I read somewhere that generally storage device's cache is faster than RAM, but I want to be sure. Additionally I read that write-caching should be enabled since current data that is to be written later to NAND flash is kept for a while in cache and provided there is data that gets modified a lot before finally being written, holding of this data and its periodic release reduces its write times to SSD thereby reducing its wearing. Now regarding to write-cache buffer flushing, I heard that SSD controllers are so fast by themselves that enabling this option is not required, because they manage flushing. However, once again, I don't know if SSDs have their own onboard cache and whether or not it is faster than their NAND flash and system RAM because if it is, keeping this option enabled would make sense. 
Recently I posted a question about an issue with my Intel 330 SSD 120GB, which was the main reason to dig deeper, suspecting the write-caching policy as the culprit of the SSD's freezing issue - assuming the release of cached data is what causes the freezes. Currently I have write caching enabled and write-cache buffer flushing disabled, because I believe the SSD controller's own management of write-cache flushing and Windows' write-cache buffer flushing conflict with each other. Since I want to troubleshoot in small steps to finally determine the source of the issue, I have decided to start with the write-caching policy and then move on to drivers, switching to AHCI later on and finally disabling DIPM (device-initiated power management) through a registry modification, thanks to @TomWijsman.
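To keep the three behaviours straight while troubleshooting, here is a minimal conceptual sketch in C# of the semantics described above. It is purely illustrative - the class and its in-memory "device" are made up for this question and are not how Windows or an SSD controller actually implements caching.

using System.Collections.Generic;

// Illustrative model of the write policies discussed above:
// "device" stands in for the slow storage medium, "cache" for the fast buffer.
class CachedDisk
{
    private readonly Dictionary<long, byte[]> cache = new Dictionary<long, byte[]>();
    private readonly Dictionary<long, byte[]> device = new Dictionary<long, byte[]>();
    private readonly HashSet<long> dirty = new HashSet<long>();

    // Write-through: the block goes to the cache AND the device before returning.
    // Reads get faster, writes do not, and nothing is lost on power failure.
    public void WriteThrough(long block, byte[] data)
    {
        cache[block] = data;
        device[block] = data;
    }

    // Write-back: the block only goes to the cache and is marked dirty.
    // Reads and writes both get faster, but dirty blocks are lost on power failure.
    public void WriteBack(long block, byte[] data)
    {
        cache[block] = data;
        dirty.Add(block);
    }

    // Buffer flushing: push every dirty block out to the device.
    // This is what a flush request asks the drive to do.
    public void Flush()
    {
        foreach (long block in dirty)
            device[block] = cache[block];
        dirty.Clear();
    }

    // Reads are served from the cache when possible, otherwise from the device.
    public byte[] Read(long block)
    {
        return cache.TryGetValue(block, out var data) ? data : device[block];
    }
}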

    Read the article

  • Integrate SharePoint 2010 with Team Foundation Server 2010

    - by Martin Hinshelwood
    Our client is using a brand new shiny installation of SharePoint 2010, so we need to integrate our upgraded Team Foundation Server 2010 instance into it. In order to do that you need to run the Team Foundation Server 2010 install on the SharePoint 2010 server and choose to install only the “Extensions for SharePoint Products and Technologies”. We want our upgraded Team Project Collection to create any new portal in this SharePoint 2010 server farm. There are a number of goodies above and beyond a solution file that require the install, with the main one being the TFS 2010 client API. These goodies allow proper integration with the creation and viewing of Work Items from SharePoint, a new feature with TFS 2010. This works in both SharePoint 2007 and SharePoint 2010, with the level of integration dependent on the version of SharePoint that you are running. There are three levels of integration, with “SharePoint Services 3.0” or “SharePoint Foundation 2010” being the lowest. This level only offers Reporting Services framed integration for reporting, along with Work Item integration and document management. The highest is Microsoft Office SharePoint Server (MOSS) Enterprise, with Excel Services integration providing some lovely dashboards. Figure: Dashboards take the guessing out of project planning and estimation. Plus writing these reports would be boring!

The extensions that you need are on the same installation media as the main TFS install, and the only difference is the options you pick during the install. Figure: Installing the TFS 2010 Extensions for SharePoint Products and Technologies onto SharePoint 2010

Annoyingly you may need to reboot a couple of times, but on this server the process was MUCH smoother than on our internal server. I think this was mostly to do with this being a clean install. Once it is installed you need to run the configuration. This will add all of the solutions and templates that are needed for SharePoint to work properly with TFS. Figure: This is where all the TFS 2010 goodies are added to your SharePoint 2010 server and the TFS 2010 object model is installed. Figure: All done - you have everything installed, but you still need to configure it.

Now that we have the TFS 2010 SharePoint Extensions installed on our SharePoint 2010 server, we need to configure them both so that they will talk happily to each other.

Configuring the SharePoint 2010 managed path for Team Foundation Server 2010
In order for TFS to automatically create your project portals you need a wildcard managed path set up. This is where TFS will create the portal during the creation of a new Team Project. To find the managed paths page for any application you first need to select the “Manage web applications” link from the SharePoint 2010 Central Administration screen. Figure: Find the “Manage web applications” link under the “Application Management” section. Once you are there you will see that the “Managed Paths” button is present but greyed out; selecting one of the applications will enable it to be clicked. Figure: You need to select an application for the SharePoint 2010 ribbon to activate. Figure: You need to select an application before you can get to the Managed Paths for that application.

Now we need to add a managed path for TFS 2010 to create its portals under. I have gone for the obvious option of just calling the managed path “TFS02”, as the TFS 2010 server is the second TFS server that the client has installed, TFS 2008 being the first.
This links the location to the server name, and as you can’t have two projects of the same name in two separate project collections there are unlikely to be any conflicts. Figure: Add a “tfs02” wildcard inclusion path to your SharePoint site.

Configure the Team Foundation Server 2010 connection to SharePoint 2010
In order to have your new TFS 2010 server talk to and create sites in SharePoint 2010 you need to tell the TFS server where to put them. As this TFS 2010 server was installed in out-of-the-box mode it has a SharePoint Services 3.0 (the free one) server running on the same box, but we want to change that so we can use the external SharePoint 2010 instance. Just open the “Team Foundation Server Administration Console” and navigate to the “SharePoint Web Applications” section. Here you click “Add” and enter the details for the managed path we just created. Figure: If you have special permissions on your SharePoint you may need to add accounts to the “Service Accounts” section.

Before we can set this new SharePoint 2010 instance as the default for our upgraded Team Project Collection, we need to configure SharePoint to take instructions from our TFS server.

Configure SharePoint 2010 to connect to Team Foundation Server 2010
On your SharePoint 2010 server open the Team Foundation Server Administration Console and select the “Extensions for SharePoint Products and Technologies” node. Here we need to “grant access” for our TFS 2010 server to create sites. Click the “Grant access” link and fill out the full URL to the TFS server, for example http://servername.domain.com:8080/tfs, and if need be restrict the path that TFS sites can be created on. Remember that when users create a new Team Project they can change the default and point it anywhere they like, as long as it is an authorised SharePoint location. Figure: Grant access for your TFS 2010 server to create sites in SharePoint 2010

Now that we have an authorised location for our Team Project portals to be created, we need to tell our Team Project Collection that this is where it should put sites by default for any new Team Projects created.

Configure the Team Foundation Server 2010 Team Project Collection to create new sites in SharePoint 2010
Back on our TFS 2010 server we need to set up the defaults for our upgraded Team Project Collection to point at the new SharePoint 2010 integration we have just set up. On the TFS 2010 server open up the “Team Foundation Server Administration Console” again and navigate to the “Team Project Collections” node. Once you are there you will see a list of all of your TPCs, and in our case we have a DefaultCollection as well as our named and upgraded collection for TFS 2008. If you select the “SharePoint Site” tab you can see that it is not currently configured. Figure: Our new upgraded TFS 2008 Team Project Collection does not have SharePoint configured

Select “Edit Default Site Location” and choose the new integration point that we just set up for SharePoint 2010. Once you have selected the “SharePoint Web Application” (the thing we just configured) it will give you an example based on that configuration point and the name of the Team Project Collection that we are configuring. Figure: Set the default location for new Team Project Portals to be created for this Team Project Collection

This is where the reason for configuring the extensions on the SharePoint 2010 server before doing this last bit becomes apparent.
TFS 2010 is going to create a site at our http://sharepointserver/tfs02/ location called http://sharepointserver/tfs02/[TeamProjectCollection], or whatever we had specified, and it would have had difficulty doing this if we had not given it permission first. Figure: If there is no Team Project Collection site at this location the TFS 2010 server is going to create one

This will create a nice Team Project Collection parent site to contain the portals for any new Team Projects that are created. It is worth noting that it will not create portals for existing Team Projects, as this process is run during the Team Project creation wizard. Figure: Just a basic parent site to host all of your new Team Project Portals as sub-sites

You will need to add all of the users who will be creating Team Projects as Administrators of this site so that they will not get an error during the Project Creation Wizard. You may also want to customise this as a proper portal to your projects if you are going to have lots of them, but it is really just a default placeholder so you have a top-level site that you can back up and point at. You have now integrated SharePoint 2010 and Team Foundation Server 2010! You can now go forth and multiply your Team Projects for this Team Project Collection, or you can continue to add portals to your other collections.

Technorati Tags: TFS 2010,Sharepoint 2010,VS ALM
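As an aside, the "TFS 2010 client API" that the extensions drop onto the SharePoint box is the same .NET object model you would use from your own code. The following is a minimal, hypothetical C# sketch of connecting to a Team Project Collection and listing its projects - the collection URL is a placeholder, and nothing in the SharePoint integration requires you to write this; it is only to show what that installed API is for.

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class TfsClientSample
{
    static void Main()
    {
        // Placeholder URL - point this at your own Team Project Collection.
        var collectionUri = new Uri("http://servername.domain.com:8080/tfs/SomeCollection");

        // Connect using the TFS 2010 client object model installed by the extensions.
        TfsTeamProjectCollection collection =
            TfsTeamProjectCollectionFactory.GetTeamProjectCollection(collectionUri);
        collection.EnsureAuthenticated();

        // The WorkItemStore service gives access to work items and projects.
        WorkItemStore store = collection.GetService<WorkItemStore>();
        foreach (Project project in store.Projects)
        {
            Console.WriteLine(project.Name);
        }
    }
}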

    Read the article

  • Creating an AJAX Accordion Menu

    - by jaullo
    Introduction
AJAX is a powerful addition to ASP.NET that provides new functionality in a simple and agile way. This post is dedicated to creating an accordion-style menu with AJAX.

About the control
The basic idea of this control is to provide a series of panels and to show and hide information inside those panels. Its use is very simple: we place each panel inside the accordion control, give each panel a header and, of course, set the content of each panel. To use the accordion control, you need the AJAX Control Toolkit.

Know the basic properties of the accordion control
Before starting to develop an accordion control, we have to know its basic properties. Other accordion properties:
FramesPerSecond - Number of frames per second used in the transition animations.
RequireOpenedPane - Prevent closing the currently opened pane when its header is clicked (which ensures one pane is always open). The default value is true.
SuppressHeaderPostbacks - Prevent the client-side click handlers of elements inside a header from firing (this is especially useful when you want to include hyperlinks in your headers for accessibility).
DataSource - The data source to use. DataBind() must be called.
DataSourceID - The ID of the data source to use.
DataMember - The member to bind to when using a DataSourceID.

AJAX Accordion Control Extender DataSource
The Accordion control extender of the AJAX Control Toolkit can also be used as a data-bound control. You can bind data retrieved from the database to the Accordion control. The Accordion control has properties such as DataSource and DataSourceID (as we saw above) that can be used to bind the data. A HeaderTemplate can be used to display the header or title for each pane generated by the Accordion control; a click on it will open or close the ContentTemplate generated by binding the data with the Accordion extender. When a DataSource is passed to the Accordion control, also call the DataBind method to bind the data. The Accordion control bound with data auto-generates the expand/collapse panes along with their headers.

This code represents the basic steps to bind the Accordion to a DataSource:

Public Sub getCategories()
    Dim sqlConn As New SqlConnection(conString)
    sqlConn.Open()
    Dim sqlSelect As New SqlCommand("SELECT * FROM Categories", sqlConn)
    sqlSelect.CommandType = System.Data.CommandType.Text
    Dim sqlAdapter As New SqlDataAdapter(sqlSelect)
    Dim myDataset As New DataSet()
    sqlAdapter.Fill(myDataset)
    sqlConn.Close()
    Accordion1.DataSource = myDataset.Tables(0).DefaultView
    Accordion1.DataBind()
End Sub

Protected Sub Accordion1_ItemDataBound(sender As Object, _
        e As AjaxControlToolkit.AccordionItemEventArgs)
    If e.ItemType = AjaxControlToolkit.AccordionItemType.Content Then
        Dim sqlConn As New SqlConnection(conString)
        sqlConn.Open()
        Dim sqlSelect As New SqlCommand("SELECT productName " & _
            "FROM Products WHERE categoryID = '" & _
            DirectCast(e.AccordionItem.FindControl("txt_categoryID"), _
                HiddenField).Value & "'", sqlConn)
        sqlSelect.CommandType = System.Data.CommandType.Text
        Dim sqlAdapter As New SqlDataAdapter(sqlSelect)
        Dim myDataset As New DataSet()
        sqlAdapter.Fill(myDataset)
        sqlConn.Close()
        Dim grd As GridView = DirectCast(e.AccordionItem.FindControl("GridView1"), GridView)
        grd.DataSource = myDataset
        grd.DataBind()
    End If
End Sub

In the above code we do two things: first, we run a SQL SELECT to retrieve all rows from the Categories table; this data is used to generate the accordion headers. Then, in the ItemDataBound handler, a second query retrieves the products for each category and binds them to the GridView inside that pane's content.
The corresponding markup is:

<asp:ScriptManager ID="ScriptManager1" runat="server">
</asp:ScriptManager>
<ajaxToolkit:Accordion ID="Accordion1" runat="server"
    TransitionDuration="100" FramesPerSecond="200" FadeTransitions="true"
    RequireOpenedPane="false" OnItemDataBound="Accordion1_ItemDataBound"
    ContentCssClass="acc-content" HeaderCssClass="acc-header"
    HeaderSelectedCssClass="acc-selected">
    <HeaderTemplate>
        <%#DataBinder.Eval(Container.DataItem,"categoryName") %>
    </HeaderTemplate>
    <ContentTemplate>
        <asp:HiddenField ID="txt_categoryID" runat="server"
            Value='<%#DataBinder.Eval(Container.DataItem,"categoryID") %>' />
        <asp:GridView ID="GridView1" runat="server" RowStyle-BackColor="#ededed"
            RowStyle-HorizontalAlign="Left" AutoGenerateColumns="false" GridLines="None"
            CellPadding="2" CellSpacing="2" Width="300px">
            <Columns>
                <asp:TemplateField HeaderStyle-HorizontalAlign="Left" HeaderText="Product Name"
                    HeaderStyle-BackColor="#d1d1d1" HeaderStyle-ForeColor="#777777">
                    <ItemTemplate>
                        <%#DataBinder.Eval(Container.DataItem,"productName") %>
                    </ItemTemplate>
                </asp:TemplateField>
            </Columns>
        </asp:GridView>
    </ContentTemplate>
</ajaxToolkit:Accordion>

Here we use <%#DataBinder.Eval(Container.DataItem,"categoryName") %> to bind the accordion header to categoryName, so we get one header for each row found in the database.

Creating a basic accordion control
As we know, to use any of the AJAX components there must be a ScriptManager registered on our page, which will be responsible for managing our controls. So the first thing we do is create our ScriptManager:

<asp:ScriptManager ID="ScriptManager1" runat="server"></asp:ScriptManager>

Then we define our accordion element and establish some basic properties:

<cc1:Accordion ID="AccordionCtrl" runat="server" SelectedIndex="0"
    HeaderCssClass="accordionHeader" ContentCssClass="accordionContent"
    AutoSize="None" FadeTransitions="true" TransitionDuration="250" FramesPerSecond="40">

Inside it we must declare the accordion PANES; these panes will hold the information or links that we want to show:

<Panes>
    <cc1:AccordionPane ID="AccordionPane0" runat="server">
        <Header>Maintenance</Header>
        <Content>
            <li><a href="mypagina.aspx">My test page</a></li>
        </Content>
    </cc1:AccordionPane>

To finish, we close the panes collection and our accordion:

</Panes>
</cc1:Accordion>

Finally, our complete example should look like this:

<asp:ScriptManager ID="ScriptManager1" runat="server"></asp:ScriptManager>
<cc1:Accordion ID="AccordionCtrl" runat="server" SelectedIndex="0"
    HeaderCssClass="accordionHeader" ContentCssClass="accordionContent"
    AutoSize="None" FadeTransitions="true" TransitionDuration="250" FramesPerSecond="40">
    <Panes>
        <cc1:AccordionPane ID="AccordionPane0" runat="server">
            <Header>Maintenance</Header>
            <Content>
                <li><a href="mypagina.aspx">My test page</a></li>
            </Content>
        </cc1:AccordionPane>
    </Panes>
</cc1:Accordion>

    Read the article

  • ODI 12c - Parallel Table Load

    - by David Allan
    In this post we will look at the ODI 12c capability of parallel table load from the aspect of the mapping developer and the knowledge module developer - two quite different viewpoints. This is about parallel table loading, which isn't to be confused with loading multiple targets per se. It supports the ability for ODI mappings to be executed concurrently, especially if there is an overlap of the datastores that they access, so any temporary resources created can be uniquely constructed by ODI. Temporary objects can be anything, basically - common examples are staging tables, indexes, views and directories - anything in the ETL that helps the data integration flow do its job. In ODI 11g users found a few workarounds (such as changing the technology prefixes - see here) to build unique temporary names, but it was more of a challenge in error cases. ODI 12c mappings by default operate exactly as they did in ODI 11g with respect to these temporary names (this is also true for upgraded interfaces and scenarios) but can be configured to support the uniqueness capabilities. We will look at this feature from two aspects: that of a mapping developer and that of a developer (of procedures or KMs).

1. Firstly as a Mapping Developer.....
1.1 Control when uniqueness is enabled
A new property is available to set unique name generation on/off. When unique names have been enabled for a mapping, all temporary names used by the collection and integration objects will be generated using unique names. This property is presented as a check-box in the Property Inspector for a deployment specification.
1.2 Handle cleanup after successful execution
Provided that all temporary objects that are created have a corresponding drop statement, all of the temporary objects should be removed during a successful execution. This should be the case with the KMs developed by Oracle.
1.3 Handle cleanup after unsuccessful execution
If an execution failed in ODI 11g, temporary tables would have been left around and cleaned up in the subsequent run. In ODI 12c, KM tasks can now have a cleanup-type task which is executed even after a failure in the main tasks. These cleanup tasks will be executed even on failure if the property 'Remove Temporary Objects on Error' is set. If the agent were to crash and not be able to execute this task, there is an ODI tool (OdiRemoveTemporaryObjects here) you can invoke to clean up the tables - it supports date ranges and the like. That's all there is to it from the aspect of the mapping developer - it's much, much simpler and more straightforward. You can now execute the same mapping concurrently, or execute many mappings using the same resource concurrently, without worrying about conflict.

2. Secondly as a Procedure or KM Developer.....
In the ODI Operator the executed code shows the actual name that is generated - you can also see the runtime code prior to execution (introduced in 11.1.1.7); for example, below, in the code type I selected 'Pre-executed Code', which lets you see the code about to be processed, and you can also see the executed code (which is the default view). References to the collection (C$) and integration (I$) names will be automatically made unique by using the odiRef APIs - these objects will have unique names whenever concurrency has been enabled for a particular mapping deployment specification. It's also possible to use name uniqueness functions in procedures and your own KMs.
2.1 New uniqueness tags
You can also make your own temporary objects have unique names by explicitly including either %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG in the name passed to calls to the odiRef APIs. Such names will always include the unique tag regardless of the concurrency setting. To illustrate, let's look at the getObjectName() method. At <% expansion time, this API will append %UNIQUE_STEP_TAG to the object name for collection and integration tables. The name parameter passed to this API may contain %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG. This API always generates to the <? version of getObjectName(). At execution time this API will replace the unique tag macros with a string that is unique to the current execution scope. The returned name will conform to the name-length restriction for the target technology and its pattern for the unique tag. Any necessary truncation will be performed against the initial name for the object and any other fixed text that may have been specified. Examples are:
<?=odiRef.getObjectName("L", "%COL_PRFEMP%UNIQUE_STEP_TAG", "D")?>
SCOTT.C$_EABH7QI1BR1EQI3M76PG9SIMBQQ
<?=odiRef.getObjectName("L", "EMP%UNIQUE_STEP_TAG_AE", "D")?>
SCOTT.EMPAO96Q2JEKO0FTHQP77TMSAIOSR_
Methods which have this kind of support include getFrom, getTableName, getTable, getObjectShortName and getTemporaryIndex. There are also APIs for retrieving this tag info; the getInfo API has been extended with the following properties (the UNIQUE* properties can also be used in ODI procedures):
UNIQUE_STEP_TAG - Returns the unique value for the current step scope, e.g. 5rvmd8hOIy7OU2o1FhsF61. Note that this will be a different value for each loop iteration when the step is in a loop.
UNIQUE_SESSION_TAG - Returns the unique value for the current session scope, e.g. 6N38vXLrgjwUwT5MseHHY9
IS_CONCURRENT - Returns info about the current mapping; will return 0 or 1 (only in % phase)
GUID_SRC_SET - Returns the UUID for the current source set/execution unit (only in % phase)
The getPop API has been extended with the IS_CONCURRENT property, which returns info about a mapping and will return 0 or 1.

2.2 Additional APIs
Some new APIs are provided, including getFormattedName, which allows KM developers to construct a name from fixed text or ODI symbols that can optionally be truncated to a maximum length and use a specific encoding for the unique tag. It has the syntax getFormattedName(String pName[, String pTechnologyCode]) and is available at both the % and the ? phase. The format string can contain the ODI prefixes that are available for getObjectName(), e.g. %INT_PRF, %COL_PRF, %ERR_PRF, %IDX_PRF, along with %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG. The latter tags will be expanded into a unique string according to the specified technology. Calls to this API within the same execution context are guaranteed to return the same unique name provided that the same parameters are passed to the call. For example:
<%=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG_AE", "ORACLE")%>
<?=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG_AE", "ORACLE")?>
C$_MY_TAB7wDiBe80vBog1auacS1xB_AE
<?=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG.log", "FILE")?>
C2_MY_TAB7wDiBe80vBog1auacS1xB.log

2.3 Name length generation
As part of name generation, the length of the generated name will be compared with the maximum length for the target technology and truncation may need to be applied.
When a unique tag is included in the generated string it is important that uniqueness is not compromised by truncation of the unique tag. When a unique tag is NOT part of the generated name, the name will be truncated by removing characters from the end - this is the existing 11g algorithm. When a unique tag is included, the algorithm will first truncate the <postfix> and, if necessary, the <prefix>. It is recommended that users ensure there is sufficient uniqueness in the <prefix> section to guarantee uniqueness of the final resultant name (a small illustrative sketch of this rule follows after the summary).

SUMMARY
To summarize, ODI 12c makes it much simpler to use mappings in concurrent cases and provides APIs that help you develop procedures or custom knowledge modules in such a way that they can be used in highly concurrent, parallel scenarios.
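Revisiting the truncation rule from section 2.3, here is a small C# sketch of the idea - compose prefix + unique tag + postfix and, when the result is too long for the target technology, shorten the postfix first and then the prefix while never touching the tag. This is only an illustration of the rule described above, not ODI's actual implementation, and the 30-character limit in the example is made up.

using System;

static class UniqueNames
{
    // Compose prefix + tag + postfix and truncate to maxLength without ever
    // shortening the unique tag: the postfix is cut first, then the prefix.
    public static string Build(string prefix, string uniqueTag, string postfix, int maxLength)
    {
        if (uniqueTag.Length > maxLength)
            throw new ArgumentException("The unique tag alone exceeds the maximum length.");

        int excess = prefix.Length + uniqueTag.Length + postfix.Length - maxLength;
        if (excess > 0)
        {
            // Cut the postfix first...
            int cutFromPostfix = Math.Min(excess, postfix.Length);
            postfix = postfix.Substring(0, postfix.Length - cutFromPostfix);
            excess -= cutFromPostfix;

            // ...and only then the prefix, if the name is still too long.
            if (excess > 0)
                prefix = prefix.Substring(0, prefix.Length - excess);
        }
        return prefix + uniqueTag + postfix;
    }
}

// Example with a hypothetical 30-character limit: the tag is kept intact,
// "_AE" is dropped and the prefix is trimmed to "C$_MY_TAB".
// UniqueNames.Build("C$_MY_TABLE", "7wDiBe80vBog1auacS1xB", "_AE", 30)
//   returns "C$_MY_TAB7wDiBe80vBog1auacS1xB" (30 characters)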

    Read the article

< Previous Page | 95 96 97 98 99 100 101 102 103 104 105 106  | Next Page >