Search Results

Search found 693 results on 28 pages for 'mutli wan'.

Page 25/28 | < Previous Page | 21 22 23 24 25 26 27 28  | Next Page >

  • Salesforce/PHP - Bulk Outbound message (SOAP), Time out issue - See update #2

    - by Phill Pafford
    Salesforce can send up to 100 requests inside 1 SOAP message. While sending this type of bulk outbound message request my PHP script finishes executing but SF fails to accept the ACK used to clear the message queue on the Salesforce side of things. Looking at the outbound message log (monitoring) I see all the messages in a pending state with the Delivery Failure Reason "java.net.SocketTimeoutException: Read timed out". If my script has finished execution, why do I get this error? I have tried these methods to increase the execution time on my server, as I have no access on the Salesforce side:

        set_time_limit(0);      // in the script
        max_execution_time = 360 ; Maximum execution time of each script, in seconds
        max_input_time = 360    ; Maximum amount of time each script may spend parsing request data
        memory_limit = 32M      ; Maximum amount of memory a script may consume

    I used the high settings just for testing. Any thoughts as to why this is failing the ACK delivery back to Salesforce? Here is some of the code. This is how I accept the incoming SOAP request and send the ACK:

        $data = 'php://input';
        $content = file_get_contents($data);
        if ($content) {
            respond('true');
        } else {
            respond('false');
        }

    The respond function:

        function respond($tf) {
            $ACK = <<<ACK
        <?xml version = "1.0" encoding = "utf-8"?>
        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
            <soapenv:Body>
                <notifications xmlns="http://soap.sforce.com/2005/09/outbound">
                    <Ack>$tf</Ack>
                </notifications>
            </soapenv:Body>
        </soapenv:Envelope>
        ACK;
            print trim($ACK);
        }

    These are in a generic script that I include into the script that uses the data for a specific workflow. I can process about 25 requests (that arrive in 1 SOAP response), but once I go over that I get the timeout error in the Salesforce queue. For 50 requests it usually takes my PHP script 86.77 seconds. Could it be Apache? PHP? I have also tested just accepting the 100-request SOAP response and sending the ACK with no further processing, and the queue clears out, so I know it's on my side of things. I show no errors in the Apache log; the script runs fine. I did find some info on the Salesforce site but still no luck. Here is the link. Also I'm using the PHP Toolkit 11 (from Salesforce). Other forum with good SF help. Thanks for any insight into this, --Phill

    UPDATE: If I receive the incoming message and print the response, should this happen first regardless of whether I do anything else after? Or does it wait for my process to finish and then print the response?

    UPDATE #2: Okay, I think I have the problem: PHP uses a single-threaded processing approach and will not send back the ACK until the thread has completed its processing. Is there a way to make this a multi-threaded process? Thread #1 - accept the incoming SOAP request and send back the ACK. Thread #2 - process the SOAP request. I know I could break it up into a DB table or flat file, but is there a way to accomplish this without doing that? I'm going to try to close the socket after the ACK submission and continue the processing - fingers crossed it will work.
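    Regarding UPDATE #2: short of real threads, one common pattern is to flush the ACK and close the HTTP connection first, then carry on processing inside the same request. A minimal sketch, assuming Apache with mod_php (under PHP-FPM, a single fastcgi_finish_request() call does the same job):

        ignore_user_abort(true);   // keep running after the client disconnects
        set_time_limit(0);

        ob_start();
        respond('true');           // the existing ACK function from above
        header('Connection: close');
        header('Content-Length: ' . ob_get_length());
        ob_end_flush();
        flush();                   // Salesforce has its ACK at this point

        // ...the heavy per-message processing continues here...

    Whether the connection actually closes immediately depends on the SAPI and any buffering in front of PHP, so treat this as a starting point rather than a guarantee.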

    Read the article

  • Does anyone have database, programming language/framework suggestions for a GUI point of sale system

    - by Jason Down
    Our company has a point of sale system with many extras, such as ordering and receiving functionality, sales and order history, etc. Our main issue is that the system was not designed properly from the ground up, so it takes too long to make fixes and handle requests from our customers. Also, the current technology we are using (Progress database, Progress 4GL for the language) incurs quite a bit of licensing expense for our customers due to multi-user license fees for database connections, etc. After a lot of discussion it is looking like we will probably start over from scratch (while maintaining the current product at least for the time being). We are looking for a couple of things: Create the system with a nice GUI front end (it is currently CHUI and the application was not built in a way that allows us to redesign the front end... no layering or separation of business logic and GUI... shudder). Create the system with the ability to modularize different functionality so the product doesn't have to include all features. This would keep the cost down for our current customers that want basic functionality and a lower price tag; the bells and whistles would be available for those that want them. Use proper design patterns to make the product easy to add to or change any part at any time (i.e. change the database or change the front end without needing to rewrite the application or most of it). This is a problem today because the Progress 4GL code is compiled directly against the database; small changes in the database require lots of code recompiling. Our new system will be Linux based, with a possibility of a client application providing functionality from one or more Windows boxes. So what I'm looking for is any suggestions on which database and/or framework or programming language(s) someone might recommend for this sort of product. Anyone that has experience in this field might be able to point us in the right direction or even have some ideas of what to avoid. We have considered .NET and SQL Express (we don't need an enterprise-level DB), but that would limit us to Windows (as far as I know anyway). I have heard of Mono for writing .NET code in a Linux environment, but I don't know much about it yet. We've also considered a Java and MySQL based implementation. To summarize, we are looking to do the following: Keep licensing costs down on the technology we will use to develop the product (Oracle, yikes! MySQL, nice.) Deliver a solution that is easily maintainable and supportable. A solution that has a component capable of running on "old" hardware through a CHUI front end (some of our customers have 40+ terminals, which would be a ton of cash to convert over to PCs). Suggestions would be appreciated. Thanks [UPDATE] I should note that we are currently performing a total cost analysis. This question is intended to give us a couple of "educated" options to look into and include in our analysis. Anyone who could share experiences/suggestions about client/server setups would be appreciated (not just those who have experience with point of sale systems... that would just be a bonus).

    Read the article

  • create graph using adjacency list

    - by sum1needhelp
    #include <iostream>
    #include <string>
    using namespace std;

    class TCSGraph {
    public:
        void addVertex(int vertex);
        void display();
        TCSGraph() { head = NULL; }
        ~TCSGraph();
    private:
        struct ListNode {
            string name;
            ListNode *next;
        };
        ListNode *head;
    };    // the closing semicolon was missing

    void TCSGraph::addVertex(int vertex) {
        for (int i = 0; i < vertex; i++) {
            cout << "what is the name of the vertex" << endl;
            string vName;
            cin >> vName;
            ListNode *newNode = new ListNode;
            newNode->name = vName;
            newNode->next = NULL;            // was left uninitialized
            if (!head) {
                head = newNode;
            } else {                          // braces were missing, so the loop
                ListNode *nodePtr = head;     // below ran even when head was just set
                while (nodePtr->next)
                    nodePtr = nodePtr->next;
                nodePtr->next = newNode;
            }
        }
    }

    void TCSGraph::display() {
        ListNode *nodePtr = head;
        while (nodePtr) {
            cout << nodePtr->name << endl;
            nodePtr = nodePtr->next;
        }
    }

    TCSGraph::~TCSGraph() {                   // was declared but never defined
        while (head) {
            ListNode *next = head->next;
            delete head;
            head = next;
        }
    }

    int main() {
        int vertex;
        cout << "how many vertices do you want to add" << endl;
        cin >> vertex;
        TCSGraph g;
        g.addVertex(vertex);
        g.display();
        return 0;
    }

    Read the article

  • iPhone sdk Cocoa Touch - Pass touches down from parent UIView to child UIScrollview

    - by Joe
    I have a UIView inside my xib in IB and inside that is a UIScrollView that is a small 80x80 square, dynamically loaded with 8 or so 80x80 thumbnail images. The UIScrollView is not clipping the images, so they extend out either side and you can swipe left and right to scroll a chosen image into the centre; paging is on so they snap to each image. I have researched and found this is the best and possibly only way to do this. The UIScrollView sits in a 'container' UIView for one reason: it is there to receive the touches/swipes and pass them down to its child, the UIScrollView, as otherwise all touches would have to start in the small 80x80 UIScrollView area and I want them to be anywhere along the row of images. I have seen some sample code somewhere for doing this but just can not implement it. Treat me as a noob, starting from beginning to end: how should the UIView and UIScrollView be set up in IB to allow any touches to be passed, and what code should I put where? The UIView is set up as scroll_container and the child UIScrollView is char_scroll. At the moment I have got it all working except for the touches being passed from the parent to the child, and at the moment the touches have to always start inside the UIScrollView (tiny 80x80 box in the centre) when I want to be able to swipe left or right in the long 480x80 horizontal parent UIView and have this still scroll the child UIScrollView. Hope you can help and understand what I mean!
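    The usual pattern here is to give scroll_container a custom UIView subclass that overrides hitTest:withEvent: and hands every touch inside its bounds to the scroll view. A sketch, assuming charScroll is an outlet on that subclass pointing at char_scroll (the outlet name is illustrative):

        // in the scroll_container subclass (.m)
        - (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
            if ([self pointInside:point withEvent:event]) {
                return self.charScroll;   // forward the touch to the UIScrollView
            }
            return nil;                   // outside the container: not ours
        }

    With that in place, the 480x80 container drives the 80x80 scroll view and paging still snaps as before.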

    Read the article

  • Optimal Serialization of Primitive Types

    - by Greg Dean
    We are beginning to roll out more and more WAN deployments of our product (.NET fat client w/ IIS-hosted Remoting backend). Because of this we are trying to reduce the size of the data on the wire. We have overridden the default serialization by implementing ISerializable (similar to this), and we are seeing anywhere from 12% to 50% gains. Most of our efforts focus on optimizing arrays of primitive types. I would like to know if anyone knows of any fancy way of serializing primitive types, beyond the obvious? For example, today we serialize an array of ints as follows: [4-bytes (array length)][4-bytes][4-bytes]. Can anyone do significantly better? The most obvious example of a significant improvement, for boolean arrays, is putting 8 bools in each byte, which we already do. Note: saving 7 bits per bool may seem like a waste of time, but when you are dealing with large magnitudes of data (which we are), it adds up very fast. Note: we want to avoid general compression algorithms because of the latency associated with them. Remoting only supports buffered requests/responses (no chunked encoding). I realize there is a fine line between compression and optimal serialization, but our tests indicate we can afford very specific serialization optimizations at very little cost in latency, whereas reprocessing the entire buffered response into a new compressed buffer is too expensive.
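    One family of tricks beyond the fixed [4-bytes][4-bytes]... layout is delta plus variable-length encoding: store each int as the zig-zag-encoded difference from its predecessor, 7 bits per byte, so small or slowly-changing values cost 1 byte instead of 4. A sketch (illustrative names, not from the product):

        using System.IO;

        static class VarIntWriter
        {
            // zig-zag keeps small negative deltas small once made unsigned
            static uint ZigZag(int v) { return (uint)((v << 1) ^ (v >> 31)); }

            public static void WriteIntArray(Stream s, int[] values)
            {
                int prev = 0;
                foreach (int v in values)
                {
                    uint u = ZigZag(v - prev);
                    prev = v;
                    while (u >= 0x80) { s.WriteByte((byte)(u | 0x80)); u >>= 7; }
                    s.WriteByte((byte)u);   // a clear high bit marks the last byte
                }
            }
        }

    Whether this beats the fixed layout depends entirely on the data: it wins on sorted IDs or smooth time series, and loses on random full-range ints.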

    Read the article

  • Telerik Silverlight RadComboBox Selected Item

    - by Chirag
    I am customizing the Telerik DataPager control; in that control I created one resource file and added one combobox to change the page size of the grid:

        <UserControl.Resources>
        .......
        <telerik:RadComboBox x:Name="CmbPageSize" MinWidth="40"
            telerik:StyleManager.Theme="{StaticResource Theme}"
            ItemsSource="{Binding Path=BindPageSize, Mode=TwoWay}"
            SelectedItem="{Binding Path=DataPagerPageSize_string, Mode=TwoWay}"></telerik:RadComboBox>
        .......

    The combo is bound with:

        public string DataPagerPageSize_string
        {
            get
            {
                if (_DataPagerPageSize_string == null || _DataPagerPageSize_string == string.Empty)
                {
                    //DatapagerIndex = 1;
                    return DefaultPageSize.ToString();
                }
                return _DataPagerPageSize_string;
            }
            set
            {
                _DataPagerPageSize_string = value;
                OnPropertyChanged("_DataPagerPageSize_string");
            }
        }

        public List<string> BindPageSize
        {
            get
            {
                List<string> Pagerdata = new List<string>();
                Pagerdata.Add("10");
                Pagerdata.Add("20");
                Pagerdata.Add("50");
                Pagerdata.Add("100");
                Pagerdata.Add("250");
                Pagerdata.Add("500");
                Pagerdata.Add("750");
                Pagerdata.Add("1000");
                Pagerdata.Add("1500");
                Pagerdata.Add("2000");
                Pagerdata.Add("Automatic");
                Pagerdata.Add("All");
                return Pagerdata;
            }
        }

    This works fine when I select a value from the combobox, but I want to change it from code-behind, like EVP.DataPagerPageSize_string = "All";. That works, except the combobox displays the old value; if I check the property it shows the newly set value, but the combobox does not select the new value.
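    The likely culprit is in the setter: OnPropertyChanged is raised with the backing-field name, so the TwoWay binding never hears that DataPagerPageSize_string changed and the combobox keeps its old value. A sketch of the fix:

        set
        {
            _DataPagerPageSize_string = value;
            // must be the property name the XAML binds to,
            // not the private field name
            OnPropertyChanged("DataPagerPageSize_string");
        }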

    Read the article

  • XML Comparison?

    - by CrazyNick
    I got a books.xml file from http://msdn.microsoft.com/en-us/library/ms762271(VS.85).aspx and just saved it with two different names, removed a few lines from the second file, and tried to compare the files using the below PowerShell code:

        clear-host
        $xml = New-Object XML
        $xml = [xml](Get-Content D:\SharePoint\Powershell\Comparator\Comparator_Config.xml)
        $xml.config.compare | ForEach-Object {
            $strFile1 = Get-Content $_.source
            $strFile2 = Get-Content $_.destination
            $diff = Compare-Object $strFile1 $strFile2
            $result1 = $diff | where { $_.SideIndicator -eq "<=" } | select InputObject
            $result2 = $diff | where { $_.SideIndicator -eq "=>" } | select InputObject
            Write-Host "`nEntries that differ in the first file`n"
            $result1 | ForEach-Object { Write-Host $_.InputObject }
            Write-Host "`nEntries that differ in the second file`n"
            $result2 | ForEach-Object { Write-Host $_.InputObject }
            $_.parameters | ForEach-Object { Write-Host $_.parameter }
        }

    It is working perfectly and giving me the results below:

        Entries that differ in the first file

        <price>36.95</price>
        <description>Microsoft Visual Studio 7 is explored in depth, looking at how Visual Basic, Visual C++, C#, and ASP+ are integrated into a comprehensive development environment.</description>

        Entries that differ in the second file

        <price></price>
        <description></description>

    However, I want to know the root node of the above-mentioned nodes. Is that possible to find? If so, how?
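    To answer the root-node question, one option (a sketch, assuming the books.xml structure from the MSDN page, i.e. <catalog><book id="..."><price>...</price></book></catalog>) is to load both files as XML and compare node by node, so each differing node can report its parent:

        $xml1 = [xml](Get-Content $file1)
        $xml2 = [xml](Get-Content $file2)
        foreach ($book in $xml1.catalog.book) {
            $other = $xml2.SelectSingleNode("/catalog/book[@id='$($book.id)']")
            if ($other -eq $null -or $other.price -ne $book.price) {
                Write-Host "difference under parent node: book id=$($book.id)"
            }
        }

    Comparing the documents as XML rather than as flat text lines is what makes ParentNode and the surrounding ancestry available.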

    Read the article

  • JS sort works in Firefox but not IE - can't work out why

    - by Max Williams
    I have a line in a javascript function that sorts an array of objects based on the order of another array of strings. This is working in Firefox but not in IE and I don't know why. Here's what my data looks like going into the sort call (I'm using a small array just to illustrate the point here):

        //cherry first then the rest in alphabetical order
        originalData = ['cherry','apple','banana','clementine','nectarine','plum']

        //data before sorting - note how clementine is the second item - we want it to be after apple and banana
        csub = [
            {"value":"cherry","data":["cherry"],"result":"cherry"},
            {"value":"clementine","data":["clementine"],"result":"clementine"},
            {"value":"apple","data":["apple"],"result":"apple"},
            {"value":"banana","data":["banana"],"result":"banana"},
            {"value":"nectarine","data":["nectarine"],"result":"nectarine"},
            {"value":"plum","data":["plum"],"result":"plum"}
        ]

        //after sorting, csub has been rearranged but still isn't right: clementine is before banana
        csubSorted = [
            {"value":"cherry","data":["cherry"],"result":"cherry"},
            {"value":"apple","data":["apple"],"result":"apple"},
            {"value":"clementine","data":["clementine"],"result":"clementine"},
            {"value":"banana","data":["banana"],"result":"banana"},
            {"value":"nectarine","data":["nectarine"],"result":"nectarine"},
            {"value":"plum","data":["plum"],"result":"plum"}
        ]

    Here's the actual sort code:

        csubSorted = csub.sort(function(a,b){
            return (originalData.indexOf(a.value) > originalData.indexOf(b.value));
        });

    Can anyone see why this wouldn't work? Is the basic javascript sort function not cross-browser compatible? Can I do this a different way (e.g. with jQuery) that would be cross-browser? Grateful for any advice - max
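    Two IE-specific problems are likely here: IE 8 has no Array.prototype.indexOf, and a sort comparator must return a negative/zero/positive number rather than the boolean this one returns. A sketch of both fixes:

        // fallback lookup for IE, which lacks Array.prototype.indexOf
        function indexOfItem(arr, item) {
            for (var i = 0; i < arr.length; i++) {
                if (arr[i] === item) { return i; }
            }
            return -1;
        }

        csubSorted = csub.sort(function (a, b) {
            // subtract so the comparator returns a number, not true/false
            return indexOfItem(originalData, a.value) - indexOfItem(originalData, b.value);
        });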

    Read the article

  • android:junit running Parameterized testcases.

    - by puneetinderkaur
    Hi, I want to run a single test case with different parameters. In Java this is possible through JUnit 4 and the code looks like this:

        package com.android.test;

        import static org.junit.Assert.assertEquals;
        import java.util.Arrays;
        import java.util.Collection;
        import org.junit.Test;
        import org.junit.runner.RunWith;
        import org.junit.runners.Parameterized;
        import org.junit.runners.Parameterized.Parameters;

        @RunWith(Parameterized.class)
        public class ParameterizedTestExample {

            private int number;
            private int number2;
            private int result;
            private functioncall coun;

            // constructor takes parameters from the array
            public ParameterizedTestExample(int number, int number2, int result1) {
                this.number = number;
                this.number2 = number2;
                this.result = result1;
                coun = new functioncall();
            }

            // array with parameters
            @Parameters
            public static Collection<Object[]> data() {
                Object[][] data = new Object[][] { { 1, 4, 5 }, { 2, 7, 9 }, { 3, 7, 10 }, { 4, 9, 13 } };
                return Arrays.asList(data);
            }

            // running our test
            @Test
            public void testCounter() {
                assertEquals("Result add x y", result, coun.calculateSum(this.number, this.number2), 0);
            }
        }

    Now my question is: can we run the same with Android test cases, where we have to deal with a Context? When I try to run the code using the Android JUnit runner, it gives me a NoClassDefFoundError.
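    At the time of writing, Android's instrumentation runner is JUnit 3-based, so @RunWith(Parameterized.class) isn't honored on-device. A workaround sketch: loop over the parameter table inside one JUnit 3 test method (functioncall is the class from the question, and AndroidTestCase supplies getContext() if the code under test needs one):

        public class FunctionCallTest extends android.test.AndroidTestCase {
            public void testCounterWithAllParams() {
                int[][] data = { { 1, 4, 5 }, { 2, 7, 9 }, { 3, 7, 10 }, { 4, 9, 13 } };
                functioncall coun = new functioncall();
                for (int[] d : data) {
                    // d[0], d[1] are inputs; d[2] is the expected sum
                    assertEquals("Result add x y", d[2], coun.calculateSum(d[0], d[1]));
                }
            }
        }

    You lose the per-parameter test names that JUnit 4 gives you, but every row still gets asserted.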

    Read the article

  • Can't Display Bitmap of Higher Resolution than CDC area

    - by T. Jones
    Hi there dear gurus and expert coders. I am not going to start with "I'm a newbie and don't know much about image programming", but unfortunately those are the facts :( I am trying to display an image from a bitmap pointer *ImageData which is of resolution 1392x1032, drawing it into an area of resolution or size 627x474. However, repeated trying doesn't seem to work. It works when I change the bitmap image I created from *ImageData to a width and height of around 627x474. I really do not know how to solve this after trying all possible solutions from various forums and Google. pDC is a CDC* and memDC is a CDC initialized in an earlier method; anything uninitialized here was initialized in other methods. Here is my code, dear humble gurus. Do provide me with the guidance Yoda and Obi-Wan provided to Luke Skywalker.

        void DemoControl::ShowImage( void *ImageData )
        {
            int Width;                   // width of image from camera
            int Height;                  // height of image from camera
            int m_DisplayWidth = 627;    // width of rectangle area to display
            int m_DisplayHeight = 474;   // height of rectangle area to display

            GetImageSize( &Width, &Height );   // this will return Width = 1392, Height = 1032

            CBitmap bitmap;
            bitmap.CreateBitmap(Width, Height, 32, 1, ImageData);

            CBitmap* pOldBitmap = memDC.SelectObject((CBitmap*)&bitmap);
            pDC->BitBlt(22, 24, 627, 474, &memDC, 0, 0, SRCCOPY);
            memDC.SelectObject((CBitmap*)pOldBitmap);
            ReleaseDC(pDC);
        }
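    BitBlt copies pixels 1:1, so only the top-left 627x474 of the 1392x1032 bitmap ever appears. The usual fix is StretchBlt, which scales the full source rectangle into the destination; a sketch (HALFTONE generally gives the best down-scaling quality, and MSDN recommends resetting the brush origin after setting it). Incidentally, CBitmap::CreateBitmap takes the plane count before the bits-per-pixel, so the (..., 32, 1, ...) arguments above may be reversed.

        // scale the whole source bitmap into the 627x474 display area
        pDC->SetStretchBltMode(HALFTONE);
        ::SetBrushOrgEx(pDC->GetSafeHdc(), 0, 0, NULL);
        pDC->StretchBlt(22, 24, m_DisplayWidth, m_DisplayHeight,
                        &memDC, 0, 0, Width, Height, SRCCOPY);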

    Read the article

  • How can I speed up Subversion checkins? (Using ANKH, latest, Visual Studio 2010)

    - by Timothy Khouri
    I've started working on a new web project with some friends... we are using the latest Subversion server (installed last week) and the latest version of AnkhSVN. My web project is a whopping 1.5 megabytes (that's with all images, css files, dll's after compiling, pdb files... etc). Checking in even super-small changes (literally adding the letter "x" to a few files for testing)... takes FOREVER! (about 10 seconds - I almost killed myself). The Ankh client is measuring in BYTES PER SECOND ... BYTES? per second... I must be doing something wrong. Does anyone know what config file has a joke totallyMessWithPeople=true so that I can turn that off or something? Oh, also, changing one "big" file of around 10k brings the speed up to nearly the speed of light (which is apparently 857 bytes per second). Help me Obi-Wan Kenobi, you're my only hope! EDIT: As a note... my real work project that uses Visual SourceSafe 2005 (I know, ouch) uploads files at about 200-500kbps from this very same computer/internet connection.

    Read the article

  • translate a PHP $string using google translator API

    - by Toni Michel Caubet
    Hey there! I've been googling for a while over how best to translate with Google Translator in PHP. I found very different ways - converting URLs, or using JS - but I want to do it only with PHP (or with a very simple JS/jQuery solution). Example (hopefully with $from_lan and $to_lan being like 'en', 'de', ... or similar):

        function translate($from_lan, $to_lan, $text){
            // do
            return $translated_text;
        }

    Can you give me a clue? Or maybe you already have this function. My intention is to use it only for the languages I have not already defined (or keys I haven't defined); that's why I want it so simple - it will be only temporary.

    EDIT: thanks for your replies, we are now trying these solutions:

        function auto_translate($from_lan, $to_lan, $text){
            $json = json_decode(file_get_contents('https://ajax.googleapis.com/ajax/services/language/translate?v=1.0&q=' . urlencode($text) . '&langpair=' . $from_lan . '|' . $to_lan));
            $translated_text = $json->responseData->translatedText;
            return $translated_text;
        }

    (There was an extra 'g' on the variables for lang... anyway.) This works now :) I didn't really understand the function much at first, so any idea why it was not accepting the object? (Now I do.) OR:

        function auto_translate($from_lan, $to_lan, $text){
            // $json = json_decode(file_get_contents('https://ajax.googleapis.com/ajax/services/language/translate?v=1.0&q=' . urlencode($text) . '&langpair=' . $from_lan . '|' . $to_lan));
            // $translated_text = $json['responseData']['translatedText'];
            error_reporting(1);
            require_once('GTranslate.php');
            try {
                $gt = new Gtranslate();
                $translated_text = $gt->english_to_german($text);
            } catch (GTranslateException $ge) {
                $translated_text = $ge->getMessage();
            }
            return $translated_text;
        }

    And this one looks great, but it doesn't even give me an error - the page won't load (error_reporting(1) :S). Thanks in advance!

    Read the article

  • How to reliably identify users across Internet?

    - by amn
    I know this is a big one. In fact, it may be used for some SO community wiki. Anyways, I am running a website that DOES NOT use explicit authentication of users. It's public, as in open to everybody. However, due to the nature of the service, some users need to be locked out due to misbehavior. I am currently blocking IP addresses, but I am aware of the supposed fact that many people purposefully reset their DHCP client cache to have their ISP assign them new addresses. Is that a fact? I think it certainly is a lucrative possibility for some people who want to circumvent being denied access. So IPs turn out to be a suboptimal way of dealing with this. But there is nothing else, is there? MAC addresses don't survive on the WAN (they change from hop to hop?), and even if they did, these can also be spoofed, although I think less easily than IP renewal. Cookies and even Flash cookies are out of the question, because there are tons of "tutorials" on how to wipe these, and those intent on wreaking havoc on the Internet are well aware of and well equipped against such rudimentary measures as I would employ. Is there anything else to lean on? I was thinking heuristic profiling - collecting available data from the client side and forming some key with it - but I have not gone as far as implementing it. Is it an option?

    Read the article

  • Export large amount of data from Oracle 10G to SQL Server 2005

    - by uniball
    Dear all, I need to export 100 million data rows (avg row length ~ 100 bytes) from an Oracle 10G database table into SQL Server (over a WAN/VLAN with 6 MBit/sec capacity) on a regular basis. So far, these are the options that I have tried, with a quick summary. Has anyone tried this before? Are there other, better options? Which option would be the best in terms of performance and reliability? The time taken has been calculated using tests on smaller amounts of data and then extrapolating it to estimate the time required.

    1. Using the data import wizard on the SQL Server, or SSIS packages, to import the data. It will take around 150 hours to complete the task.

    2. Using an Oracle batch job to spool data into a comma-delimited flat file, then using an SSIS package to FTP this file to the SQL Server and load directly from the flat file. The issue here is the size of the flat file, which is expected to run into GBs.

    3. Although this option is drastically different, I am even considering using a Linked Server to query the Oracle data directly at run time to avoid bringing in data. Performance is a big problem and I have limited control over the Oracle database in terms of creating table indexes.

    Regards, Uniball
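    For option 2, if the spool route is chosen, SQL*Plus settings along these lines keep the flat-file generation fast and clean (a sketch; the column list and LINESIZE are placeholders for the real table, and the file can be gzip'd before the FTP step to tame its size on the 6 MBit link):

        SET TERMOUT OFF FEEDBACK OFF HEADING OFF PAGESIZE 0 TRIMSPOOL ON LINESIZE 200
        SPOOL /tmp/my_table.csv
        SELECT col1 || ',' || col2 || ',' || col3 FROM my_table;
        SPOOL OFF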

    Read the article

  • iptables : how to allow incoming ftp traffic?

    - by logansama
    Hi, still fighting my way through the jungle that is called iptables. I have managed to allow FTP access outside of our LAN; both of these would work (NOTE: eth0 is the LAN interface and eth1 is the WAN interface):

        iptables -t filter -A FORWARD -i eth0 -p tcp --dport 20:21 -j ACCEPT

    or

        iptables -A FORWARD -i eth0 -o eth1 -p tcp --sport 20:21 --dport 1024:65535 -j ACCEPT

    But when I connect to an external FTP server I manage to log in, and all is fine until it wishes to LIST the directory content. Then nothing happens, as the data is blocked, due to the fact that I do not have a rule set up to allow it! (My last rule on the FORWARD chain is to block all traffic.) I have tried a gazillion rules (many of which I did not understand) to try and allow the FTP traffic back through my server. One such rule, for example, was:

        iptables -A FORWARD -i eth1 -o eth0 -p tcp --sport 20:21 --dport 1024:65535 -j ACCEPT

    But I cannot get the LIST to work; it just times out after a while. Would anyone know how to build a rule which would allow the FTP LIST / such traffic back in? Or have a link to sources I could work through? Thank you,
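    The usual answer is not more port rules but connection tracking: load the FTP conntrack helper so the kernel marks the data connection as RELATED, then allow established/related traffic back through. A sketch (the module name varies by kernel version):

        # load the FTP connection-tracking helper (older kernels: ip_conntrack_ftp)
        modprobe nf_conntrack_ftp

        # allow replies and FTP data connections back through the forward chain
        iptables -A FORWARD -i eth1 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT

    This handles both active and passive FTP without opening the whole 1024:65535 range statically.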

    Read the article

  • Expandable paragraphs with HTML and CSS

    - by user3704175
    I was wondering if anyone here would be so kind as to help me out a bit. I am looking to make expandable paragraphs for my client's website. They would like to keep all of the content from their site, which is pretty massive, and they want a total overhaul of the design. They mainly want to keep the content for SEO purposes. Anyhow, I thought it would be helpful for both of us if there is some way to use expandable paragraphs, you know, with a "read more..." link after a certain line of text. I know that there are some jQuery and JavaScript solutions for this, but we really would like to stay away from those options, if at all possible. We would like HTML and CSS, if we can. Here is kind of an example:

    HEADING HERE

    Paragraph with a bunch of text. I would like this to appear in a pre-determined line. For example, maybe the start of the paragraph goes on for, let's say, three lines and then we have the [read more...] When the visitor clicks "read more", we would like the rest of the content to just expand to reveal the article in its entirety. I would like for the content to already be on the page, so it just expands. I don't want it to be called in from another file or anything, if that makes sense. Thank you in advance for any and all help. It will be greatly appreciated! Testudo
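    One script-free option is the CSS "checkbox hack": the full text is already in the page (so crawlers see it), and the :checked state of a hidden checkbox toggles its visibility. A sketch (works in browsers that support :checked; very old IE would need a fallback):

        <style>
          #more-1 { display: none; }              /* the hidden checkbox */
          .more   { display: none; }              /* the rest of the article */
          #more-1:checked ~ .more  { display: block; }
          #more-1:checked ~ label  { display: none; }
        </style>

        <p>First three lines of the paragraph...</p>
        <input type="checkbox" id="more-1">
        <label for="more-1">[read more...]</label>
        <div class="more">
          <p>The rest of the content, already on the page, revealed on click.</p>
        </div>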

    Read the article

  • Invoking different methods on threads

    - by Kraken
    I have a main process, main. It creates 10 threads (say), and then what I want to do is the following:

        while (required) {
            Thread t = new Thread(new ClassImplementingRunnable());
            t.start();
            counter++;
        }

    Now I have the list of these threads, and for each thread I want to do a set of processes, the same for all, hence I put that implementation in the run method of ClassImplementingRunnable. After the threads have done their execution, I want to wait for all of them to stop, and then invoke them again - but this time I want to run them serially, not in parallel. For this I join each thread, to wait for them to finish execution, but after that I am not sure how to invoke them again and run that piece of code serially. Can I do something like:

        for (each thread) {
            t.reinvoke();  // how can I do that?
            t.doThis();    // also, where does doThis() go, given that my ClassImplementingRunnable is an inner class?
        }

    Also, I want to use the same threads, i.e. I want them to continue from where they left off, but in a serial manner. I am not sure how to go about the last piece of pseudocode. Kindly help. Working in Java.
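    One caveat first: a java.lang.Thread cannot be started twice, so "re-invoking" the same Thread objects is not possible; keep the Runnables instead and run them again yourself. A sketch using an ExecutorService for the parallel pass and a plain loop for the serial pass (ClassImplementingRunnable is the class from the question, and it would need to keep its own state fields if the second pass should continue where the first left off):

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.*;

        public class TwoPass {
            public static void main(String[] args) throws Exception {
                List<ClassImplementingRunnable> tasks = new ArrayList<>();
                for (int i = 0; i < 10; i++) tasks.add(new ClassImplementingRunnable());

                // pass 1: run all tasks in parallel and wait for completion
                ExecutorService pool = Executors.newFixedThreadPool(10);
                List<Future<?>> futures = new ArrayList<>();
                for (Runnable r : tasks) futures.add(pool.submit(r));
                for (Future<?> f : futures) f.get();   // replaces the join() loop
                pool.shutdown();

                // pass 2: same task objects, one after another on this thread
                for (Runnable r : tasks) r.run();
            }
        }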

    Read the article

  • gwt get array button value

    - by graybow
    My GWT project has a FlexTable showing data (an image and a button) in each row and column, but my button won't work properly. This is my current code:

        private Button[] b = new Button[]{ new Button("a"), ..., new Button("j") };
        private int z = 0;
        ...
        public void UpdateTabelGallery(JsArray<GalleryData> str){
            for(int i = 0; i < str.length(); i++){
                b[i].setText(str.get(i).gettitle());
                UpdateTabelGallery(str.get(i));
            }
        }

        public void UpdateTabelGallery(GalleryData str){
            Image img = new Image();
            img.setUrl(str.getthumburl());
            HTML himage = new HTML("<a href=\"" + str.geturl() + "\">" + img + "</a>" + b[z]);
            TabelGaleri.setWidget(y, x, himage);
            // is here the right place?
            b[z].addClickHandler(new ClickHandler(){
                @Override
                public void onClick(ClickEvent event) {
                    Window.alert("I want to show the clicked button text " + b[z].getText());
                }
            });
            z++;
        }

    I'm still confused about where I should put my button handler. With this current code it seems the ClickHandler doesn't work inside a loop, and if I put it outside the loop it's not working either, because I need to know which button was clicked. I need to get my button's index - but how? Is there any option other than an array of buttons? Thanks
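    The handler reads b[z] at click time, after z has already been incremented past this row. The usual fix is to capture the current button in a final local variable inside the method, so each ClickHandler closes over its own button; a sketch:

        final Button clicked = b[z];          // capture before z moves on
        clicked.addClickHandler(new ClickHandler() {
            @Override
            public void onClick(ClickEvent event) {
                Window.alert("clicked button text: " + clicked.getText());
            }
        });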

    Read the article

  • How to get a current snapshot of a MySQL table and store it in a CSV file (after creating it)?

    - by Rachel
    I have a large database table, approximately 5GB, and I want to get a current snapshot of it using "SELECT * FROM MyTableName". I am using PDO in PHP to interact with the database, so preparing a query and then executing it like this:

        // Execute the prepared query
        $result->execute();
        $resultCollection = $result->fetchAll(PDO::FETCH_ASSOC);

    is not an efficient way, as lots of memory is used for storing the data in the associative array, which is approximately 5GB. My final goal is to collect the data returned by the SELECT query into a CSV file and put the CSV file in an FTP location from where the client can get it. Another option I thought of was:

        SELECT * INTO OUTFILE "c:/mydata.csv"
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        LINES TERMINATED BY "\n"
        FROM my_table;

    But I am not sure if this would work, as I have a cron that initiates the complete process and we do not have a CSV file, so basically for this approach the PHP script will have to: create a CSV file, do a SELECT query on the database, and store the query result into the CSV file. What would be the best or most efficient way to do this kind of task? Any suggestions!
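    A middle road between fetchAll() and INTO OUTFILE is to stream the result set row by row with an unbuffered query and append each row straight to the CSV, so memory stays flat regardless of table size. A sketch (assumes the pdo_mysql driver; the output path is illustrative):

        // unbuffered: rows are pulled from the server as they are fetched
        $pdo->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);

        $stmt = $pdo->query('SELECT * FROM MyTableName');
        $out  = fopen('/tmp/mydata.csv', 'w');

        while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
            fputcsv($out, $row);   // one row at a time, never the whole 5 GB
        }
        fclose($out);

    This also sidesteps the INTO OUTFILE limitation that the file is written on the database server rather than where the cron job runs.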

    Read the article

  • How to Install Oracle Software on Remote Linux Server

    - by James Taylor
    It is becoming more common these days to install Oracle software on remote Linux servers. This issue has always existed, but was generally resolved either by silent installs or by someone physically going to the server to install the software. This is becoming more difficult with the popular virtualisation and cloud deployment strategies. This post provides the steps involved to install Oracle software using the GUI interface on a remote Linux server. There are many ways to achieve this; the way I resolve this issue is via Virtual Network Computing (VNC), as it is shipped with RedHat and OEL out of the box. For this post I'm using OEL 5 deployed on an OVM guest.

    1. If not already done so, download and install a client version of VNC so you can connect to the server. There are many out there; for the purpose of this post I use UltraVNC. You can download a free version from http://www.uvnc.com/download/index.html

    2. By default the VNC server is installed in your RedHat and OEL OS, but it is not configured. The way VNC works is that, when started, it creates a client instance for the user and binds it to a specific port. So if you have an account on the Linux box you can set up a VNC server session for that user - you don't need to be root. For the purpose of this document I'm going to use oracle as the user to set up a VNC session, as this is the user I want to use to install the software. However, to start the VNC service you must be root. As the root user, run the following command:

        service vncserver start
        Starting VNC server: no displays configured                [  OK  ]

    3. Log in to the Linux box as the user you want to use to install the Oracle software:

        [oracle@lisa ~]$

    4. Run the command to create a new VNC server instance for the oracle user:

        vncserver

    5. You will be asked to supply password information. This is what you will enter when connecting from your desktop client. This password is also independent of the actual Linux user password; the VNC server is acting as a proxy to this instance.

        You will require a password to access your desktops.
        Password:
        Verify:
        xauth:  creating new authority file /home/oracle/.Xauthority

        New 'lisa.nz.oracle.com:1 (oracle)' desktop is lisa.nz.oracle.com:1

        Creating default startup script /home/oracle/.vnc/xstartup
        Starting applications specified in /home/oracle/.vnc/xstartup
        Log file is /home/oracle/.vnc/lisa.nz.oracle.com:1.log

    As you can see, a new instance lisa.nz.oracle.com:1 has been created. If you were to run the vncserver command again, another instance lisa.nz.oracle.com:2 would be created.

    6. If you are going through a firewall, you will need to ensure that port 5901 (display :1) is open between your client desktop and the Linux server. Depending on the options chosen at install time, a firewall could be in place. The simplest way to disable it is with the following command (you will need to be root); this will stop the firewall while you install:

        service iptables stop

    If you just want to add a port to the accepted list, use the firewall UI (again as root):

        system-config-security-level

    7. Now you are ready to connect to the server via VNC. Using the software installed in step 1, start the VNC client. You should be prompted for the server and port.

    8. If connectivity is established, you will be prompted for the password entered in step 5. You should now be presented with a terminal screen, ready to install software. Go to the location of the Oracle install software and start the Oracle Universal Installer.

    Read the article

  • OBIEE 11.1.1 - User Interface (UI) Performance Is Slow With Internet Explorer 8

    - by Ahmed A
    The OBIEE 11g UI is slow in IE 8 and faster in Firefox. For VPN or WAN users, it takes a long time to display links on Dashboards via IE 8. The cause is that IE 8 generates many HTTP 304 return calls, and this makes the 11g UI slower when compared to the Mozilla Firefox browser. To resolve this issue, you can implement HTTP compression and caching. This is a best practice.

    Why use web server compression / caching for OBIEE?

    - Bandwidth savings: enabling HTTP compression can have a dramatic improvement on the latency of responses. By compressing static files and dynamic application responses, it will significantly reduce the remote (high latency) user response time.
    - Improved request/response latency: caching makes it possible to suppress the payload of the HTTP reply using the 304 status code. Minimizing round trips over the web to re-validate cached items can make a huge difference in browser page load times.

    [A screen shot in the original post depicts the flow and where the compression and decompression occur.]

    Solution:

    a. How to enable HTTP caching / compression in Oracle HTTP Server (OHS) 11.1.1.x

    1. To implement HTTP compression / caching, install and configure Oracle HTTP Server (OHS) 11.1.1.x for the bi_serverN Managed Servers (refer to the "OBIEE Enterprise Deployment Guide for Oracle Business Intelligence" document for details).

    2. On the OHS machine, open the HTTP Server configuration file (httpd.conf) for editing. This file is located in the OHS installation directory, for example: ORACLE_HOME/Oracle_WT1/instances/instance1/config/OHS/ohs1

    3. In the httpd.conf file, verify that the following directives are included and not commented out:

        LoadModule expires_module "${ORACLE_HOME}/ohs/modules/mod_expires.so"
        LoadModule deflate_module "${ORACLE_HOME}/ohs/modules/mod_deflate.so"

    4. Add the following lines in the httpd.conf file below the LoadModule section and restart the OHS. Note: for the Windows platform, you will need to enclose any paths in double quotes ("), for example: Alias "/analytics" "ORACLE_HOME/bifoundation/web/app" and <Directory "ORACLE_HOME/bifoundation/web/app">.

        Alias /analytics ORACLE_HOME/bifoundation/web/app
        # Please replace ORACLE_HOME with your actual BI ORACLE_HOME path
        <Directory ORACLE_HOME/bifoundation/web/app>
            # We don't generate proper cross-server ETags so disable them
            FileETag none
            SetOutputFilter DEFLATE
            # Don't compress images
            SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary
            <FilesMatch "\.(gif|jpeg|png|js|x-javascript|javascript|css)$">
                # Enable future expiry of static files
                ExpiresActive on
                # 1 week; this stops the HTTP 304 calls generated by IE 8
                ExpiresDefault "access plus 1 week"
                Header set Cache-Control "max-age=604800"
            </FilesMatch>
            DirectoryIndex default.jsp
        </Directory>
        # Restrict access to WEB-INF
        <Location /analytics/WEB-INF>
            Order Allow,Deny
            Deny from all
        </Location>

    Note: make sure you replace the placeholder "ORACLE_HOME" above with the correct path for your BI ORACLE_HOME. For example, my BI Oracle Home path is /Oracle/BIEE11g/Oracle_BI1/bifoundation/web/app

    Important notes:

    - The caching rules above are restricted to static files found inside the /analytics directory (/web/app). This approach is safer than setting static file caching globally. In some customer environments you may not get 100% performance gains in the IE 8.0 browser, in which case you need to extend the caching rules to other directories with static file content.
    - If OHS is installed on a separate dedicated machine, make sure the static files in your BI ORACLE_HOME (../Oracle_BI1/bifoundation/web/app) are accessible to the OHS instance. [A before-and-after screen shot in the original post summarizes the improvements after enabling compression and caching.]

    Read the article

  • Redirect all access requests to a domain and subdomain(s) except from specific IP address? [closed]

    - by Christopher
    This is a self-answered question... After much wrangling I found the magic combination of mod_rewrite rules, so I'm posting here. My scenario is that I have two domains - domain1.com and domain2.com - both of which are currently serving identical content (by way of a global 301 redirect from domain1 to domain2). Domain1 was then chosen to be repurposed as a 'portal' domain - with a corporate CMS-based site leading off from the front page, and the existing 'retail' domain (domain2) left to serve the main web site. In addition, a staging subdomain was created on domain1 in order to prepare the new corporate site without impinging on the root domain's existing operation. I contemplated just rewriting all requests to domain2 and setting up the new corporate site 'behind the scenes' without using a staging domain, but I usually use subdomains when setting up new sites. Finally, I required access to the 'actual' contents of the domains and subdomains - i.e., to not be redirected like all other visitors - in order that I can develop the new site and test it in the staging environment on the live server, as I'm not using a separate development webserver in this case. I also have another test subdomain on domain1 which needed to be preserved. The way I eventually set it up was as follows (10.2.2.1 would be my home WAN IP).

    .htaccess in root of domain1:

        RewriteEngine On
        RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1
        RewriteCond %{HTTP_HOST} !^staging.domain1.com$ [NC]
        RewriteCond %{HTTP_HOST} !^staging2.domain1.com$ [NC]
        RewriteRule ^(.*)$ http://domain2.com/$1 [R=301]

    .htaccess in staging subdomain on domain1:

        RewriteEngine On
        RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1
        RewriteCond %{HTTP_HOST} ^staging.revolver.coop$ [NC]
        RewriteRule ^(.*)$ http://domain2.com/$1 [R=301,L]

    The multiple .htaccess files and multiple rulesets require more processing overhead and longer iteration, as the visitor is potentially redirected twice; however, I find it to be a more granular method of control, as I can selectively allow more than one IP address access to individual staging subdomain(s) without automatically granting them access to everything else. It also keeps the rulesets fairly simple and easy to read (or re-interpret, because I'm always forgetting how I put rules together!). If anybody can suggest a more efficient way of merging all these rules and conditions into just one main ruleset in the root of domain1, please post! I'm always keen to learn; this post is more my attempt to preserve this information for those who are looking to redirect entire domains for all visitors except themselves (for design/testing purposes), and not just denying specific file access for maintenance mode (there are many good examples of simple mod_rewrite rules for 'maintenance mode' style operation easily findable via Google). You can also extend the IP address detection - firstly by using wildcards, e.g. ^10\.2\.2\..* : the last octet's \..* denotes the usual "." and then "zero or more arbitrary characters", signified by the .* - so you can match an entire subnet if you wish. Note that a regex character class such as [0-9] matches a single character, so single-digit octets can be written as ^10\.2\.[0-9]\. , but multi-digit ranges cannot be written as [120-140]; a range like 120-140 needs an alternation, e.g. ^10\.2\.2\.(12[0-9]|13[0-9]|140)$ . The third way, if you wish to specify multiple discrete IP addresses, is to bracket them in the style of ^(1\.1\.1\.1|2\.2\.2\.2|3\.3\.3\.3)$ , and you can of course combine these forms.
    NB: if you're using individual RewriteCond lines to specify multiple IPs / ranges, make sure to put [OR] at the end of each one, otherwise mod_rewrite will interpret them as "if the IP address matches 1.1.1.1 AND matches 2.2.2.2...", which is of course impossible! However, as far as I'm aware, this isn't necessary if you're using the ! negator to specify "and is not...". Kudos also to SE: this older question also came in useful when I was verifying my own knowledge prior to my futzing around with code. This page was helpful, as were the various other links posted below (can't hyperlink them all due to spam protection... other regex checkers are available). The AddedBytes cheat sheet is useful to pin up on your wall. Other referenced URLs: internetofficer.com/seo-tool/regex-tester/ fantomaster.com/faarticles/rewritingurls.txt addedbytes.com/cheat-sheets/mod_rewrite-cheat-sheet/
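    For the record, the two rulesets can be merged into one in the root of domain1 by folding both staging subdomains into a single negative host condition (a sketch; staging and staging2 collapse into one regex, and the IP condition gets an anchoring $ so 10.2.2.1 doesn't also match 10.2.2.1x):

        RewriteEngine On
        RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1$
        RewriteCond %{HTTP_HOST}   !^staging2?\.domain1\.com$ [NC]
        RewriteRule ^(.*)$ http://domain2.com/$1 [R=301,L]

    The trade-off is exactly the one noted above: one ruleset is faster to process, but per-subdomain IP allowances are no longer possible.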

    Read the article

  • MySQL Cluster 7.3: On-Demand Webinar and Q&A Available

    - by Mat Keep
    The on-demand webinar for the MySQL Cluster 7.3 Development Release is now available. You can learn more about the design, implementation, and getting started with all of the new MySQL Cluster 7.3 features from the comfort and convenience of your own device, including:

    - Foreign Key constraints in MySQL Cluster
    - Node.js NoSQL API
    - Auto-installation of higher-performance distributed clusters

    We received some great questions over the course of the webinar, and I wanted to share those for the benefit of a broader audience.

    Q. What Foreign Key actions are supported?
    A. The core referential actions defined in the SQL:2003 standard are implemented: CASCADE, RESTRICT, NO ACTION, SET NULL.

    Q. Where are Foreign Keys implemented, i.e. data nodes or SQL nodes?
    A. They are implemented in the data nodes, and therefore can be enforced for both the SQL and NoSQL APIs.

    Q. Are they compatible with the InnoDB Foreign Key implementation?
    A. Yes, with the following exceptions:
    - InnoDB doesn't support "No Action" constraints; MySQL Cluster does.
    - You can choose to suspend FK constraint enforcement with InnoDB using the FOREIGN_KEY_CHECKS parameter; at the moment, MySQL Cluster ignores that parameter.
    - You cannot set up FKs between 2 tables where one is stored using MySQL Cluster and the other InnoDB.
    - You cannot change primary keys through the NDB API, which means that the MySQL Server actually has to simulate such operations by deleting and re-adding the row. If the PK in the parent table has a FK constraint on it, this causes non-ideal behaviour. With Restrict or No Action constraints, the change will result in an error. With Cascaded constraints, you'd want the rows in the child table to be updated with the new FK value, but the implicit delete of the row from the parent table would remove the associated rows from the child table, and the subsequent implicit insert into the parent wouldn't reinstate the child rows. For this reason, an attempt to add an ON UPDATE CASCADE where the parent column is a primary key will be rejected.

    Q. Does adding or dropping Foreign Keys cause downtime due to a schema change?
    A. Nope, this is an online operation. MySQL Cluster supports a number of online schema changes, i.e. adding and dropping indexes, adding columns, etc.

    Q. Where can I see an example of node.js with MySQL Cluster?
    A. Check out the tutorial and download the code from GitHub.

    Q. Can I use the auto-installer to support remote deployments? How about setting up MySQL Cluster 7.2?
    A. Yes to both!

    Q. Can I get a demo?
    A. Check out the tutorial. You can download the code from http://labs.mysql.com/ - go to the Select Build drop-down box.

    Q. What is the minimum internet speed required for a geo-distributed cluster with synchronous replication?
    A. If you're splitting your cluster between sites then we recommend a network latency of 20ms or less. Alternatively, use MySQL asynchronous replication, where the latency of your WAN doesn't impact the latency of your reads/writes.

    Q. Where can one learn more about the PayPal project with MySQL Cluster?
    A. Take a look at the following - you'll find press coverage, a video and slides from their keynote presentation.

    So, if you want to learn more, listen to the new MySQL Cluster 7.3 on-demand webinar. MySQL Cluster 7.3 is still in the development phase, so it would be great to get your feedback on these new features, and things you want to see!
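    By way of illustration, a foreign key between two NDB tables now reads just as it would with InnoDB (a sketch, not taken from the webinar itself; the table names are made up):

        CREATE TABLE parent (
          id INT NOT NULL PRIMARY KEY
        ) ENGINE=NDBCLUSTER;

        CREATE TABLE child (
          id INT NOT NULL PRIMARY KEY,
          parent_id INT,
          INDEX (parent_id),
          FOREIGN KEY (parent_id) REFERENCES parent (id)
            ON DELETE CASCADE ON UPDATE RESTRICT
        ) ENGINE=NDBCLUSTER;

        -- enforced in the data nodes, so it applies to SQL and NoSQL access alike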

    Read the article

  • Crime Scene Investigation: SQL Server

    - by Rodney Landrum
    “The packages are running slower in Prod than they are in Dev” My week began with this simple declaration from one of our lead BI developers, quickly followed by an emailed spreadsheet demonstrating that, over 5 executions, an extensive ETL process was running average 630 seconds faster on Dev than on Prod. The situation needed some scientific investigation to determine why the same code, the same data, the same schema would yield consistently slower results on a more powerful server. Prod had yet to be officially christened with a “Go Live” date so I had the time, and having recently been binge watching CSI: New York, I also had the inclination. An inspection of the two systems, Prod and Dev, revealed the first surprise: although Prod was indeed a “bigger” system, with double the amount of RAM of Dev, the latter actually had twice as many processor cores. On neither system did I see much sign of resources being heavily taxed, while the ETL process was running. Without any real supporting evidence, I jumped to a conclusion that my years of performance tuning should have helped me avoid, and that was that the hardware differences explained the better performance on Dev. We spent time setting up a Test system, similarly scoped to Prod except with 4 times the cores, and ported everything across. The results of our careful benchmarks left us truly bemused; the ETL process on the new server was slower than on both other systems. We burned more time tweaking server configurations, monitoring IO and network latency, several times believing we’d uncovered the smoking gun, until the results of subsequent test runs pitched us back into confusion. Finally, I decided, enough was enough. Hadn’t I learned very early in my DBA career that almost all bottlenecks were caused by code and database design, not hardware? It was time to get back to basics. With over 100 SSIS packages and hundreds of queries, each handling specific tasks such as file loads, bulk inserts, transforms, logging, and so on, the task seemed formidable. And yet, after barely an hour spent with Profiler, Extended Events, and wait statistics DMVs, I had a lead in the shape of a query that joined three tables, containing millions of rows, returned 3279 results, but performed 239K logical reads. As soon as I looked at the execution plans for the query in Dev and Test I saw the culprit, an implicit conversion warning on a join predicate field that was numeric in one table and a varchar(50) in another! I turned this information over to the BI developers who quickly resolved the data type mismatches and found and fixed “several” others as well. After the schema changes the same query with the same databases ran in under 1 second on all systems and reduced the logical reads down to fewer than 300. The analysis also revealed that on Dev, the ETL task was pulling data across a LAN, whereas Prod and Test were connected across slower WAN, in large part explaining why the same process ran slower on the latter two systems. Loading the data locally on Prod delivered a further 20% gain in performance. As we progress through our DBA careers we learn valuable lessons. Sometimes, with a project deadline looming and pressure mounting, we choose to forget them. I was close to giving into the temptation to throw more hardware at the problem. I’m pleased at least that I resisted, though I still kick myself for not looking at the code on day one. 
It can seem a daunting prospect to return to the fundamentals of the code so close to roll out, but with the right tools, and surprisingly little time, you can collect the evidence that reveals the true problem. It is a lesson I trust I will remember for my next 20 years as a DBA, if I’m ever again tempted to bypass the evidence.
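    For illustration, the class of bug described above looks something like this (a hypothetical reconstruction, not the actual schema): a join predicate comparing an INT column to a VARCHAR(50) column forces an implicit conversion on every row and defeats index seeks, and aligning the types removes it.

        -- before: CONVERT_IMPLICIT on every row of the large table
        SELECT f.*
        FROM   fact_rows f
        JOIN   lookup_vals l ON f.item_id = l.item_id;   -- INT vs VARCHAR(50)

        -- fix: make the column types agree
        ALTER TABLE lookup_vals ALTER COLUMN item_id INT NOT NULL;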

    Read the article

  • XNA Multiplayer Games and Networking

    - by JoshReuben
    ·        XNA communication must by default be lightweight – if you are syncing game state between players from the Game.Update method, you must minimize traffic. That game loop may be firing 60 times a second and player 5 needs to know if his tank has collided with any player 3 and the angle of that gun turret. There are no WCF ServiceContract / DataContract niceties here, but at the same time the XNA networking stack simplifies the details. The payload must be simplistic - just an ordered set of numbers that you would map to meaningful enum values upon deserialization.   Overview ·        XNA allows you to create and join multiplayer game sessions, to manage game state across clients, and to interact with the friends list ·        Dependency on Gamer Services - to receive notifications such as sign-in status changes and game invitations ·        two types of online multiplayer games: system link game sessions (LAN) and LIVE sessions (WAN). ·        Minimum dev requirements: 1 Xbox 360 console + Creators Club membership to test network code - run 1 instance of game on Xbox 360, and 1 on a Windows-based computer   Network Sessions ·        A network session is made up of players in a game + up to 8 arbitrary integer properties describing the session ·        create custom enums – (e.g. GameMode, SkillLevel) as keys in NetworkSessionProperties collection ·        Player state: lobby, in-play   Session Types ·        local session - for split-screen gaming - requires no network traffic. ·        system link session - connects multiple gaming machines over a local subnet. ·        Xbox LIVE multiplayer session - occurs on the Internet. Ranked or unranked   Session Updates ·        NetworkSession class Update method - must be called once per frame. ·        performs the following actions: o   Sends the network packets. o   Changes the session state. o   Raises the managed events for any significant state changes. o   Returns the incoming packet data. ·        synchronize the session à packet-received and state-change events à no threading issues   Session Config ·        Session host - gaming machine that creates the session. XNA handles host migration ·        NetworkSession properties: AllowJoinInProgress , AllowHostMigration ·        NetworkSession groups: AllGamers, LocalGamers, RemoteGamers   Subscribe to NetworkSession events ·        GamerJoined ·        GamerLeft ·        GameStarted ·        GameEnded – use to return to lobby ·        SessionEnded – use to return to title screen   Create a Session session = NetworkSession.Create(         NetworkSessionType.SystemLink,         maximumLocalPlayers,         maximumGamers,         privateGamerSlots,         sessionProperties );   Start a Session if (session.IsHost) {     if (session.IsEveryoneReady)     {        session.StartGame();        foreach (var gamer in SignedInGamer.SignedInGamers)        {             gamer.Presence.PresenceMode =                 GamerPresenceMode.InCombat;   Find a Network Session AvailableNetworkSessionCollection availableSessions = NetworkSession.Find(     NetworkSessionType.SystemLink,       maximumLocalPlayers,     networkSessionProperties); availableSessions.AllowJoinInProgress = true;   Join a Network Session NetworkSession session = NetworkSession.Join(     availableSessions[selectedSessionIndex]);   Sending Network Data var packetWriter = new PacketWriter(); foreach (LocalNetworkGamer gamer in session.LocalGamers) {     // Get the tank associated with this player.     
Tank myTank = gamer.Tag as Tank;     // Write the data.     packetWriter.Write(myTank.Position);     packetWriter.Write(myTank.TankRotation);     packetWriter.Write(myTank.TurretRotation);     packetWriter.Write(myTank.IsFiring);     packetWriter.Write(myTank.Health);       // Send it to everyone.     gamer.SendData(packetWriter, SendDataOptions.None);     }   Receiving Network Data foreach (LocalNetworkGamer gamer in session.LocalGamers) {     // Keep reading while packets are available.     while (gamer.IsDataAvailable)     {         NetworkGamer sender;          // Read a single packet.         gamer.ReceiveData(packetReader, out sender);          if (!sender.IsLocal)         {             // Get the tank associated with this packet.             Tank remoteTank = sender.Tag as Tank;              // Read the data and apply it to the tank.             remoteTank.Position = packetReader.ReadVector2();             …   End a Session if (session.AllGamers.Count == 1)         {             session.EndGame();             session.Update();         }   Performance •        Aim to minimize payload, reliable in order messages •        Send Data Options: o   Unreliable, out of order -(SendDataOptions.None) o   Unreliable, in order (SendDataOptions.InOrder) o   Reliable, out of order (SendDataOptions.Reliable) o   Reliable, in order (SendDataOptions.ReliableInOrder) o   Chat data (SendDataOptions.Chat) •        Simulate: NetworkSession.SimulatedLatency , NetworkSession.SimulatedPacketLoss •        Voice support – NetworkGamer properties: HasVoice ,IsTalking , IsMutedByLocalUser

    Read the article

< Previous Page | 21 22 23 24 25 26 27 28  | Next Page >