Search Results

Search found 20816 results on 833 pages for 'vsphere client'.


  • WCF - Return object without serializing?

    - by Mayo
    One of my WCF functions returns an object that has a member variable of a type from another library that is beyond my control. I cannot decorate that library's classes. In fact, I cannot even use DataContractSurrogate because the library's classes have private member variables that are essential to operation (i.e. if I return the object without those private member variables, the public properties throw exceptions). If I say that interoperability for this particular method is not needed (at least until the owners of this library can revise to make their objects serializable), is it possible for me to use WCF to return this object such that it can at least be consumed by a .NET client? How do I go about doing that? Update: I am adding pseudo code below...

        // My code, I have control
        [DataContract]
        public class MyObject
        {
            private TheirObject theirObject;

            [DataMember]
            public int SomeNumber
            {
                get { return theirObject.SomeNumber; } // public property exposed
                private set { }
            }
        }

        // Their code, I have no control
        public class TheirObject
        {
            private TheirOtherObject theirOtherObject;

            public int SomeNumber
            {
                get { return theirOtherObject.SomeOtherProperty; }
                set
                {
                    // ...
                }
            }
        }

    I've tried adding DataMember to my instance of their object, making it public, using a DataContractSurrogate, and even manually streaming the object. In all cases, I get some error that eventually leads back to their object not being explicitly serializable.
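
    One workaround worth sketching here (a common pattern, not something from the original question; MyObjectDto and LoadTheirObject are illustrative names): copy just the values the client needs into a plain data-transfer object that WCF can serialize, so the third-party type never crosses the service boundary.

        // Minimal DTO sketch, assuming only SomeNumber has to reach the client.
        [DataContract]
        public class MyObjectDto
        {
            [DataMember]
            public int SomeNumber { get; set; }
        }

        // The service method maps the unserializable object onto the DTO before returning it.
        public MyObjectDto GetMyObject()
        {
            TheirObject theirObject = LoadTheirObject(); // hypothetical factory
            return new MyObjectDto { SomeNumber = theirObject.SomeNumber };
        }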

    Read the article

  • Playing FLV unexpectedly stops, but why?

    - by Josamoto
    I am trying to play an FLV video that is about 90 MB big, using the following snippet:

        private function playVideo(url:String):void {
            var customClient:Object = new Object();
            customClient.onMetaData = metaDataHandler;

            var nc:NetConnection = new NetConnection();
            nc.connect(null);

            var ns:NetStream = new NetStream(nc);
            ns.client = customClient;
            ns.play(url);

            var myVideo:Video = new Video();
            myVideo.attachNetStream(ns);
            addChild(myVideo);

            function metaDataHandler(infoObject:Object):void {
                myVideo.width = 640;
                myVideo.x = stage.stageWidth / 2 - 640 / 2;
                myVideo.height = 400;
                myVideo.y = 230;
            }
        }

    The video is not going to be streamed across the internet; in fact, the application will run locally and videos will also be loaded locally from disc. When starting my application, the video starts, but at random locations playback completely freezes up, without any errors being thrown at all. Does anybody have any idea as to why this might happen?
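
    One thing worth checking (an assumption, not something stated in the question): nc, ns and myVideo are local variables, so once playVideo() returns nothing holds a reference to them and the garbage collector is free to reclaim the NetConnection/NetStream mid-playback, which can look like a silent freeze. A sketch that keeps the references alive and also surfaces status events (metaDataHandler is assumed to be promoted to a member function):

        private var nc:NetConnection;
        private var ns:NetStream;
        private var myVideo:Video;

        private function playVideo(url:String):void {
            nc = new NetConnection();
            nc.connect(null);

            ns = new NetStream(nc);
            ns.client = { onMetaData: metaDataHandler };
            ns.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
            ns.addEventListener(AsyncErrorEvent.ASYNC_ERROR, onAsyncError);
            ns.play(url);

            myVideo = new Video(640, 400);
            myVideo.attachNetStream(ns);
            addChild(myVideo);
        }

        private function onStatus(e:NetStatusEvent):void {
            trace(e.info.code); // e.g. NetStream.Buffer.Empty or NetStream.Play.Stop
        }

        private function onAsyncError(e:AsyncErrorEvent):void {
            trace(e.text);
        }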

    Read the article

  • Seeking good practice advice: multisite in Drupal

    - by deanloh
    I'm using multisite to host my client sites. During the development stage, I use a subdomain to host the staging site, e.g. client1.mydomain.com. Here's how it looks under the SITES folder:

        /sites/client1.mydomain.com

    When the site is completed and ready to go live, I create another folder for the actual domain, e.g. client1.com. Hence:

        /sites/client1.com

    Next, I create symlinks under client1.com for FILES and SETTINGS.PHP that point to the subdomain, i.e.

        /sites/client1.com/settings.php --> /sites/client1.mydomain.com/settings.php
        /sites/client1.com/files --> /sites/client1.mydomain.com/files

    Finally, to prevent Google from indexing both the subdomain and the actual domain, I create a rule in .htaccess to rewrite client1.mydomain.com to client1.com, so anyone who tries to access the subdomain is redirected to the actual domain. The above arrangement works perfectly fine, but I somehow feel there is a simpler way to achieve the same thing. Please feel free to share your views; all advice is much appreciated.
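
    For reference, the .htaccess rule described above might look something like this (a sketch using the placeholder domains from the question):

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^client1\.mydomain\.com$ [NC]
        RewriteRule ^(.*)$ http://client1.com/$1 [R=301,L]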

    Read the article

  • How to handle javascript & css files across a site?

    - by Industrial
    Hi everybody, I have had some thoughts recently on how to handle shared JavaScript and CSS files across a web application. In the web application I am currently working on, I have quite a large number of different JavaScript and CSS files placed in a folder on the server. Some of the files are reused, while others are not. On a production site, it's quite stupid to have a high number of HTTP requests and many kilobytes of unnecessary JavaScript and redundant CSS being loaded. The solution to that is of course to create one big bundled file per page that contains only the necessary code, which is then minified and sent compressed (gzip) to the client. There's no trouble in bundling and minifying the JavaScript files by hand if you only have to do it once, but since the app is continuously maintained and things change and develop, it quite soon becomes a headache to do this manually while pushing out new updates that change the JavaScript and/or CSS files in production. What's a good approach to handle this? How do you handle this in your application?
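
    As a sketch of the build-step idea described above (the file names and the choice of minifier are assumptions, not from the question), a small script run at deploy time can regenerate the per-page bundles so nobody has to do it by hand:

        #!/bin/sh
        # Concatenate only the scripts a given page needs, then minify the bundle.
        cat jquery.plugins.js page-home.js > build/home.bundle.js
        uglifyjs build/home.bundle.js -o build/home.bundle.min.js
        # Serve the .min.js file with gzip compression enabled on the web server.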

    Read the article

  • How can I hide this overlay element intelligently?

    - by inkedmn
    The site I'm working on has a collection of navigation elements across the top ("Products", "Company", etc.). When you mouse over the Products link, an overlay appears that shows a list of products with links to each. There's a small link at the top of the container that, when clicked, closes the container. All of this works as advertised. The client has asked that, once a user's mouse pointer is a sufficient distance from the overlay element, the overlay element should close (without them having to click the 'close' link). This element appears on multiple pages that have disparate content, so I'm afraid it won't be as simple as adding a mouseover listener to another single element within the page and having it work everywhere. My question, I suppose, is this: is there a relatively easy way to know when the mouse cursor is x pixels away from this container and trigger an event when this occurs? My other thought is that I could just find several elements on the page that fit these criteria and add mouseover listeners to each, but I'm assuming there's a more elegant way of handling this. Thanks in advance - and please let me know if more detail is needed.
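
    One way to do the distance check (a jQuery sketch; the #overlay id and the 100px threshold are assumptions, not from the question) is to watch mousemove on the document and measure how far the pointer is from the overlay's bounding box:

        var HIDE_DISTANCE = 100; // px; tune to taste

        $(document).mousemove(function (e) {
            var $overlay = $('#overlay');
            if (!$overlay.is(':visible')) { return; }

            var off = $overlay.offset();
            var right = off.left + $overlay.outerWidth();
            var bottom = off.top + $overlay.outerHeight();

            // Distance from the pointer to the nearest edge of the overlay box.
            var dx = Math.max(off.left - e.pageX, 0, e.pageX - right);
            var dy = Math.max(off.top - e.pageY, 0, e.pageY - bottom);

            if (Math.sqrt(dx * dx + dy * dy) > HIDE_DISTANCE) {
                $overlay.hide();
            }
        });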

    Read the article

  • Live image edit and crop

    - by 422
    I was just thinking, which is always dangerous. We use the Valums image uploader. Aside from that, I am looking for an inline image editor, but with a difference. A user uploads an image (let's say 800 x 600), and our system wants to see the image at 170 x 32. Now, I know we can use PHP to resize images, but I was thinking: does anyone know of a system where we can display the image and the user can scale and crop it (with, say, a predefined overlay)? They scale down to the nearest acceptable size, then click the crop tool, which shows a div overlay with say 70% transparency that they can drag over the image, and then click crop. The image is then cropped to the exact size we need and can be saved. I am sure I have seen some jQuery stuff done like this, I just cannot for the life of me find it. Essentially, we would like to offer a simple client-side image processor that's lightweight, plus the ability to save what they did. Sorry, no code to show, as it's more of a request. Regards
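
    A sketch of the usual pattern this describes (assuming the Jcrop jQuery plugin, which is one tool that matches the description; the element ids and the fixed 170 x 32 target are illustrative): the plugin collects the crop rectangle client-side, and the coordinates are posted to a server-side script that does the actual resize and crop.

        // Client side: let the user drag a crop rectangle constrained to 170 x 32.
        $('#uploaded-image').Jcrop({
            aspectRatio: 170 / 32,
            onSelect: function (c) {
                // c.x, c.y, c.w, c.h describe the chosen rectangle; stash them in
                // hidden fields and submit to a server-side crop script.
                $('#crop-x').val(c.x);
                $('#crop-y').val(c.y);
                $('#crop-w').val(c.w);
                $('#crop-h').val(c.h);
            }
        });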

    Read the article

  • Highlighting a custom UIButton

    - by Dan Ray
    The app I'm building has LOTS of custom UIButtons laying over top of fairly precisely laid out images. Buttonish, controllish images and labels and what have you, but with a clear custom-style UIButton sitting over top of it to handle the tap. My client yesterday says, "I want that to highlight when you tap it". Never mind that it immediately pushes on a new uinavigationcontroller view... it didn't blink, and so he's confused. Oy. Here's what I've done to address it. I don't like it, but it's what I've done: I subclassed UIButton (naming it FlashingUIButton). For some reason I couldn't just configure it with a background image on control mode highlighted. It never seemed to hit the state "highlighted". Don't know why that is. So instead I wrote:

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
        {
            [self setBackgroundImage:[UIImage imageNamed:@"grey_screen"] forState:UIControlStateNormal];
            [self performSelector:@selector(resetImage) withObject:nil afterDelay:0.2];
            [super touchesBegan:touches withEvent:event];
        }

        - (void)resetImage
        {
            [self setBackgroundImage:nil forState:UIControlStateNormal];
        }

    This happily lays my grey_screen.png (a 30% opaque black box) over the button when it's tapped and replaces it with happy emptiness 0.2 of a second later. This is fine, but it means I have to go through all my many nibs and change all my buttons from UIButtons to FlashingUIButtons. Which isn't the end of the world, but I'd really hoped to address this as a UIButton category, and hit all birds with one stone. Any suggestions for a better approach than this one?
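
    For comparison, a hedged sketch of the built-in alternatives (not from the post; the second one is the documented highlighted-state route the post reports not firing, so treat it as something to re-test rather than a guaranteed fix):

        // Option 1: UIButton's built-in glow while the finger is down.
        myButton.showsTouchWhenHighlighted = YES;

        // Option 2: a translucent image shown only for the highlighted state.
        [myButton setBackgroundImage:[UIImage imageNamed:@"grey_screen"]
                            forState:UIControlStateHighlighted];

    Either can be applied in a loop over existing buttons (for example from viewDidLoad), which avoids re-classing every button in every nib.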

    Read the article

  • Chat app vs REST app - use a thread in an Activity or a thread in a Service?

    - by synic
    In Virgil Dobjanschi's talk, "Developing Android REST client applications" (link here), he said a few things that took me by surprise. Including:

        Don't run HTTP queries in threads spawned by your activities. Instead, communicate with a Service to do them, and store the information in a ContentProvider. Use a ContentObserver to be notified of changes.
        Always perform long-running tasks in a Service, never in your Activity.
        Stop your Service when you're done with it.

    I understand that he was talking about a REST API, but I'm trying to make it fit with some other ideas I've had for apps. One of the APIs I've been using uses long polling for its chat interface. There is a loop of HTTP queries, most of which will time out. This means that, as long as the app hasn't been killed by the OS, or the user hasn't specifically turned off the chat feature, I'll never be done with the Service, and it will stay open forever. This seems less than optimal. Long question short: for a chat application that uses long polling to simulate push and immediate response, is it still best practice to use a Service to perform the HTTP queries, and store the information in a ContentProvider?

    Read the article

  • Which CSS identifier is used for the selected tab in tabbed tables in browsers other than IE?

    - by David Navarre
    When you have a table on a form in Notes, you can choose to display only one row at a time (via the Special Table Row Display parameter on the Table Rows tab of the Table properties). In a Notes document displayed using Internet Explorer that contains such a table, a row is displayed with a cell for each "tab". The TD that serves as the tab for the selected "Notes table row" is assigned

        <td class="dominoSelTopTab">

    while the other tabs get

        <td class="dominoTopTab">

    However, when using other browsers, it's not nearly as simple. In Firefox, each "tab" ends up as a single-celled-single-row-table within the table, with very little to identify it:

        <td><table border="1" cellpadding="2">
        <tr><td><div align="center"><b>Tab 2</b></div></td></tr>
        </table></td>

    A non-selected tab would show as follows:

        <td><table border="1" cellpadding="2">
        <tr><td><div align="center"><a name="1." href="/Projects/MyCSS.nsf/0c3b9489476440c085257a62006d97d6/d482a1767a4af77f85257a62006db064?OpenDocument&amp;TableRow=1.0#1." target="_self">Tab 1</a></div></td></tr>
        </table></td>

    So, the question is, how do I identify the selected tabs and the non-selected tabs when not using IE? Note: For those who are not Notes developers, the HTML is auto-generated from the visual design as laid out in the Notes designer client. I would replace it all with manual HTML, except there is so much of it that doing so would consume far too much time.
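
    A hedged sketch based only on the generated markup above (it assumes jQuery is available and the selectors may need narrowing to the tabbed table's container): in the non-IE output, the selected tab is the only cell whose label is wrapped in <b> rather than in a link, so client-side script can key off that difference and attach the same classes that the IE output already provides.

        // Tag the selected and unselected tab cells so CSS can style them like the IE version.
        $('td > table td:has(b)').addClass('dominoSelTopTab');
        $('td > table td:has(a)').addClass('dominoTopTab');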

    Read the article

  • Accepting a socket on Windows 7 takes more than a second

    - by eburger
    Here's what I've done: I wrote a minimal web server (using Qt, but I don't think it's relevant here). I'm running it on a legal Windows 7 32-bit. The problem: if I make a request with Firefox, IE, Chrome or Safari, it takes about one second before my server sees that there is a new connection to be accepted. Clues:

        Using other clients (wget, my own test client that just opens a socket) rather than Firefox, IE, Chrome or Safari, seeing the new connection is a matter of milliseconds.
        I installed Apache and tried the clients mentioned above. Serving the request takes ~50ms as expected.
        The problem isn't reproducible when running Windows XP (or compiling and running the same code under Linux).
        The problem seems to present itself only when connecting to localhost. A friend connected over the Internet and serving the connection was a matter of milliseconds.
        Running the server on different ports has no effect on the 1 second latency.

    Here's what I've tried without luck: stopped the Windows Defender service, stopped the Windows Firewall service. Any ideas? Is this some clever 'security feature' in Windows 7? Why isn't Apache affected? Why are only the browsers affected?

    Read the article

  • Delay PHP execution until JavaScript cookie set?

    - by Adam184
    I am trying to delay PHP execution until a cookie is set through JavaScript. The code is below; I trimmed the createCookie JavaScript function for simplicity (I've tested the function itself and it works).

        <?php if(!isset($_COOKIE["test"])) { ?>
            <script type="text/javascript">
                $(function() {
                    // createCookie script
                    createCookie("test", 1, 3600);
                });
            </script>
            <?php
            // Reload the page to ensure cookie was set
            if(!isset($_COOKIE["test"])) {
                header("Location: http://localhost/asdf.php/");
            }
        } ?>

    At first I had no idea why this didn't work, however after using microtime() I figured out that the PHP after the <script> was executing before the jQuery ready function. I reduced my code significantly to show a simple version that is answerable. I am well aware that I am able to use setcookie() in PHP; the requirements for the cookie are client-side. I understand mixing PHP and JavaScript is incorrect, but any help on how to make this work (is there a PHP delay? - I tried sleep(), which didn't work and I didn't think it would, since the scripts would be delayed as well) would be greatly appreciated.
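
    PHP finishes building the page before the browser ever runs the JavaScript, so there is nothing to "delay" on the server side; the usual pattern is to set the cookie in JavaScript and then re-request the page so PHP sees the cookie on the next pass. A sketch (it relies on the createCookie() helper from the question and is only emitted when PHP has not seen the cookie, so it runs at most once):

        $(function () {
            createCookie("test", 1, 3600);
            window.location.reload(); // the next request carries the cookie to PHP
        });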

    Read the article

  • How would you structure your workflow for a web application?

    - by cx42net
    Hi! When designing a web application (or something else), it's good to have a workflow, and better to have a well-ordered one. With this idea in mind, I'd like to know what your process is, from having an idea to maintaining a great working project. For me, the process is currently the following:

        Having the idea
        Checking whether the project already exists, and how it works
        Describing its functionality on paper
        Finding a good and adequate name for it (and checking the domain availability with WHOISMyProject)
        Making a quick layout of the project on paper
        Designing the project (via The GIMP, Photoshop, etc.)
        Making a complete mockup of each page
        Developing a prototype of the client-side application (with fake data)
        Developing the server side
        Testing
        Writing the documentation/help/FAQ
        Releasing the project
        Maintaining it

    Would you change the order of some points? Add or remove some? I would be pleased to know how you do it. I'm looking to set up a perfect workflow in order to make my project become real in the best way possible. Thank you for your opinion!

    Read the article

  • Actionscript flex sockets and telnet

    - by MAC
    I am trying to make a Flex application where it gets data from a telnet connection, and I am running into a weird problem. To give a brief introduction: I want to read data from a process that exposes it through a socket. If in the shell I type telnet localhost 8651, I receive the XML and then the connection is closed (I get the following: "Connection closed by foreign host."). Anyway, I found a simple tutorial online for Flex that essentially is a telnet client, and one would expect it to work, but everything follows Murphy's law and nothing ever works! Now I have messages being printed in every event handler and all places that I can think of. When I connect to the socket nothing happens; no event handler is triggered, not even the connect or close handler, and if I do the following, socket.connected returns false! I get no errors; try/catch raises no exception. I am at a loss as to what's going wrong.

        socket.connect(serverURL, portNumber);
        msg(socket.connected.toString());

    Is there something about telnet that I do not know that is causing this to not work? What's more interesting is why none of the events get fired. Another interesting thing is that I have some Python code that does the same thing and it's able to get the XML back! The following is the Python code that works:

        def getStats(host, port):
            sock = socket.socket()
            sock.connect((host, port))
            res = sock.recv(1024*1024*1024, socket.MSG_WAITALL)
            sock.close()
            return statFunc(res)

    So I ask you, what's going wrong? Is there some inherent problem with how Flex handles sockets?
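
    Two hedged observations, neither confirmed by the post: Socket.connect() in ActionScript is asynchronous, so socket.connected is expected to be false on the very next line, and Flash Player will refuse a raw TCP connection unless the target host serves a socket policy file, which surfaces only as a SecurityErrorEvent. A sketch that registers the listeners before connecting (msg() is the question's own logging function):

        var socket:Socket = new Socket();
        socket.addEventListener(Event.CONNECT, function(e:Event):void {
            msg("connected");
        });
        socket.addEventListener(ProgressEvent.SOCKET_DATA, function(e:ProgressEvent):void {
            msg(socket.readUTFBytes(socket.bytesAvailable)); // the XML arrives here
        });
        socket.addEventListener(IOErrorEvent.IO_ERROR, function(e:IOErrorEvent):void {
            msg("io error: " + e.text);
        });
        socket.addEventListener(SecurityErrorEvent.SECURITY_ERROR, function(e:SecurityErrorEvent):void {
            msg("security error (possibly a missing socket policy file): " + e.text);
        });
        socket.connect(serverURL, portNumber);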

    Read the article

  • Why is a function's length information from another shared lib in the ELF?

    - by minastaros
    Our project (C++, Linux, gcc, PowerPC) consists of several shared libraries. When releasing a new version of the package, only those libs should change whose source code was actually affected. With "change" I mean absolute binary identity (the checksum over the file is compared; a different checksum means a different version according to the policy). (I should mention that the whole project is always built at once, no matter whether any code has changed per library or not.) Usually this can be achieved by hiding the private parts of the included header files and not changing the public ones. However, there was a case where just a delete was added to the destructor of a class TableManager (in the TableManager.cpp file!) of library libTableManager.so, and yet the binary/checksum of library libB.so (which uses class TableManager) changed.

        // TableManager.h:
        class TableManager
        {
        public:
            TableManager();
            ~TableManager();
        private:
            int* myPtr;
        };

        // TableManager.cpp:
        TableManager::~TableManager()
        {
            doSomeCleanup();
            delete myPtr; // this delete has been added
        }

    By inspecting libB.so with readelf --all libB.so and looking at the .dynsym section, it turned out that the length of all functions, even the dynamically used ones from other libraries, is stored in libB! It looks like this (the length is the 668 in the 3rd column):

        527: 00000000   668 FUNC    GLOBAL DEFAULT  UND _ZN12TableManagerD1Ev

    So my questions are: Why is the length of a function actually stored in the client lib? Wouldn't a start address be sufficient? Can this be suppressed somehow when compiling/linking libB.so (a kind of "stripping")? We would really like to reduce this degree of dependency...

    Read the article

  • What is the fastest way to insert 100 000 records from one database to another?

    - by Pentium10
    I have a mobile application. My client has a large data set, ~100,000 records. It's updated frequently. When we sync, we need to copy from one database to another. I have attached the second database to the main one, and run an insert into table select * from sync.table. This is extremely slow; it takes about 10 minutes, I think. I noticed that the journal file gets increased step by step. How can I speed this up?

    EDITED 1: I have indexes off, and I have the journal off. Using insert into table select * from sync.table it still takes 10 minutes.

    EDITED 2: If I run a query like

        select id, invitem, invid, cost from inventory where itemtype = 1 order by invitem limit 50

    it takes 15-20 seconds. The table schema is:

        CREATE TABLE inventory (
            'id' INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
            'serverid' INTEGER NOT NULL DEFAULT 0,
            'itemtype' INTEGER NOT NULL DEFAULT 0,
            'invitem' VARCHAR,
            'instock' FLOAT NOT NULL DEFAULT 0,
            'cost' FLOAT NOT NULL DEFAULT 0,
            'invid' VARCHAR,
            'categoryid' INTEGER DEFAULT 0,
            'pdacategoryid' INTEGER DEFAULT 0,
            'notes' VARCHAR,
            'threshold' INTEGER NOT NULL DEFAULT 0,
            'ordered' INTEGER NOT NULL DEFAULT 0,
            'supplier' VARCHAR,
            'markup' FLOAT NOT NULL DEFAULT 0,
            'taxfree' INTEGER NOT NULL DEFAULT 0,
            'dirty' INTEGER NOT NULL DEFAULT 1,
            'username' VARCHAR,
            'version' INTEGER NOT NULL DEFAULT 15
        )

    Indexes are created like

        CREATE INDEX idx_inventory_categoryid ON inventory (pdacategoryid);
        CREATE INDEX idx_inventory_invitem ON inventory (invitem);
        CREATE INDEX idx_inventory_itemtype ON inventory (itemtype);

    I am wondering, isn't insert into ... select * from the fastest built-in way to do a massive data copy?

    EDITED 3: SQLite is serverless, so please stop voting for a particular answer, because that is not the answer, I'm sure.
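
    A sketch of the usual bulk-copy tuning for this situation (assumptions: the attached database is named sync, the relaxed pragmas are acceptable for the duration of the sync, and the indexes can be rebuilt afterwards):

        PRAGMA synchronous = OFF;
        PRAGMA journal_mode = MEMORY;

        DROP INDEX IF EXISTS idx_inventory_categoryid;
        DROP INDEX IF EXISTS idx_inventory_invitem;
        DROP INDEX IF EXISTS idx_inventory_itemtype;

        BEGIN TRANSACTION;
        INSERT INTO inventory SELECT * FROM sync.inventory;
        COMMIT;

        -- Rebuild the indexes once, instead of maintaining them row by row during the copy.
        CREATE INDEX idx_inventory_categoryid ON inventory (pdacategoryid);
        CREATE INDEX idx_inventory_invitem ON inventory (invitem);
        CREATE INDEX idx_inventory_itemtype ON inventory (itemtype);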

    Read the article

  • Nginx Rails app can't deploy

    - by user3596718
    I have an issue with my Rails application running with Passenger and nginx, hosted on Ubuntu 12.04. In the nginx.conf file below, my "example.com" (regular HTML) and "redmine.example.com" (Rails app) are working perfectly, but my "crete.example.com" (another Rails app) is showing "502 Bad Gateway". I have them both hosted in /var/data with the same permissions and ownership; I have also tried different ports. I can't think of anything else to try.

        worker_processes 1;
        events {
            worker_connections 1024;
        }
        http {
            passenger_root /usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini;
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            keepalive_timeout 65;

            server {
                listen 80;
                server_name example.com;
                root /opt/nginx/html;
            }

            server {
                server_name redmine.example.com;
                root /var/data/redmine/public;
                passenger_enabled on;
                location ~ ^/<SUBURI>(/.*|$) {
                    alias /var/data/redmine/public$1;
                    passenger_base_uri /redmine;
                    passenger_app_root /var/data/redmine;
                    passenger_document_root /var/data/redmine/public;
                    passenger_enabled on;
                }
            }

            server {
                server_name crete.example.com;
                root /var/data/crete/public;
                passenger_enabled on;
                location ~ ^/<SUBURI>(/.*|$) {
                    alias /var/data/crete/public$1;
                    passenger_base_uri /crete;
                    passenger_app_root /var/data/crete;
                    passenger_document_root /var/data/crete/public;
                    passenger_enabled on;
                }
            }
        }

    These are my Ruby and Rails versions:

        ruby 2.0.0p451 (2014-02-24 revision 45167) [x86_64-linux]
        Rails 4.1.0

    My nginx error.log:

        2014/05/02 12:29:50 [error] 3343#0: *4 upstream prematurely closed connection while reading response header from upstream, client: xxx.xx.xx.xx, server: crete.example.com, request: "GET / HTTP/1.1", upstream: "passenger:/tmp/passenger.1.0.3 323/generation-0/request:", host: "crete.example.com"

    If you need any other conf file to solve this, don't hesitate to ask.
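
    A hedged first diagnostic step (not from the post): with Passenger, "upstream prematurely closed connection" usually means the Rails app crashed while booting, so booting the app by hand shows the real error instead of the 502.

        # Run the crete app directly and read whatever error it prints on startup.
        cd /var/data/crete
        RAILS_ENV=production bundle exec rails server -p 3001

        # Also worth looking at: `passenger-status`, and the nginx error log with
        # `passenger_log_level` raised in nginx.conf.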

    Read the article

  • passing xml to a webservice

    - by Neale
    I have a simple web service that will have one method: DoTransactions(xml). The reason I am using XML as a parameter is that the parameters will often change. So, for example, it could be:

        <payload>
            <userId>1234</userId>
            <partnerId>ptn654</partnerId>
        </payload>

    OR

        <payload>
            <partnerId>ptn654</partnerId>
            <items>
                <item1>
                    <cost>10</cost>
                    <description>This is item 1</description>
                </item1>
            </items>
        </payload>

    As you can see, the XML string will always change (this is due to a client request). Would it be better to pass in a string and parse the XML in the method, or is there a better way to do it? This web service will be used from various different programming languages.
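
    If the method does end up taking the XML as a plain string, parsing it inside the service keeps the contract stable while the payload shape keeps changing. A sketch, assuming a .NET (ASMX-style) service since the platform isn't stated; the names are illustrative:

        [WebMethod]
        public void DoTransactions(string xml)
        {
            // Parse whatever shape the client sent; elements that are absent simply come back null.
            XDocument payload = XDocument.Parse(xml);
            string partnerId = (string)payload.Root.Element("partnerId");
            string userId = (string)payload.Root.Element("userId");
            // ... dispatch on whichever elements are present ...
        }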

    Read the article

  • Safe json parsing with jquery?

    - by user246114
    Hi, I am using jQuery with JSON. My client pages generate JSON, which I store on my server. The clients can then fetch the JSON back out later, parse it, and show it. Since my clients are generating the JSON, it may not be safe. I think jQuery uses eval() internally. Is that true? Is there a way to use the native JSON parsers from the browsers where available, and otherwise fall back to manual parsing? I'm new to jQuery, so I don't know where I'd insert my own parsing code. I'm doing something like:

        $.ajax({
            url: 'myservlet',
            type: 'GET',
            dataType: 'json',
            timeout: 1000,
            error: function(){
                alert('Error loading JSON');
            },
            success: function(json){
                alert("It worked!: " + json.name + ", " + json.grade);
            }
        });

    So in the success() method, the JSON object is already parsed for me. Is there a way to catch it as a raw string first? Then I can decide whether to use the native parsers or manual parsing (hoping there's a jQuery plugin for that...). The articles I'm reading are all from different years, so I don't know if jQuery has already abandoned eval() for JSON. Thank you
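
    A sketch of the "catch it as a raw string first" idea: ask jQuery for plain text, then choose the parser yourself. window.JSON is the browser's native parser where one exists (json2.js can supply a compatible one elsewhere); myManualParse below is a hypothetical fallback, not a real function.

        $.ajax({
            url: 'myservlet',
            type: 'GET',
            dataType: 'text', // hand back the raw response string
            timeout: 1000,
            error: function () { alert('Error loading JSON'); },
            success: function (raw) {
                var json;
                try {
                    json = window.JSON ? JSON.parse(raw) : myManualParse(raw);
                } catch (e) {
                    alert('Invalid JSON from server');
                    return;
                }
                alert("It worked!: " + json.name + ", " + json.grade);
            }
        });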

    Read the article

  • Problem with a string's format in C++ while doing TCP communication

    - by james t
    Hi, I am building a simple C++ client. I am splitting the info I get from the server into frames, and pass each frame to a function that processes it. I split the frame into lines using

        Poco::StringTokenizer tokenizer(frame, "\n");

    I take the first line of the tokenizer, which represents the type of frame:

        StmpCommand command(tokenizer[0]);

    A StmpCommand is an enum with the different types of messages, and the constructor works as follows:

        StmpCommand(std::string command): commandType_() {
            bool x = command == "CONNECTED";
            std::cout << x << std::endl;
            if ("SUBSCRIBE" == command) commandType_ = SUBSCRIBE;
            else if ("UNSUBSCRIBE" == command) commandType_ = UNSUBSCRIBE;
            else if ("SEND" == command) commandType_ = SEND;
            else if ("BEGIN" == command) commandType_ = BEGIN;
            else if ("COMMIT" == command) commandType_ = COMMIT;
            else if ("CONNECT" == command) commandType_ = CONNECT;
            else if ("MESSAGE" == command) commandType_ = MESSAGE;
            else if ("RECEIPT" == command) commandType_ = RECEIPT;
            else if ("CONNECTED" == command) commandType_ = CONNECTED;
            else if ("DISCONNECT" == command) commandType_ = DISCONNECT;
            else if ("ERROR" == command) commandType_ = ERROR;
            else {
                std::cerr << "Error in building StmpCommand object, unknown type - " << command << std::endl;
            }
        }

    The first frame I am trying to process is a CONNECTED frame, therefore I try to create a StmpCommand with CONNECTED as the constructor's only argument, and for some reason I am getting:

        Error in building StmpCommand object, unknown type - CONNECTED

    I am clearly passing a string containing CONNECTED, but I'm guessing there is something else in there that isn't allowing the condition else if ("CONNECTED" == command) to happen.
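
    A guess worth testing (not confirmed by the post): if the server terminates lines with "\r\n", splitting on "\n" alone leaves a trailing '\r' on every token, so the string is actually "CONNECTED\r" and none of the comparisons match, while the printed error still looks correct. A small trim before the comparison makes that invisible difference go away:

        #include <string>

        static std::string trim(const std::string& s)
        {
            const std::string ws = " \t\r\n";
            const std::size_t first = s.find_first_not_of(ws);
            if (first == std::string::npos) return "";
            const std::size_t last = s.find_last_not_of(ws);
            return s.substr(first, last - first + 1);
        }

        // Usage: StmpCommand command(trim(tokenizer[0]));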

    Read the article

  • How might the security of a system be improved using database procedures?

    - by Centurion
    The usage of Oracle PL/SQL procedures for controlling access to data is often emphasized in PL/SQL books and other sources as being the more secure approach. I've seen several systems where all business logic related to data is performed through packages, procedures and functions, so the application code becomes quite "dumb" and is only responsible for the visualization part. I have even heard some devs call such approaches, and the architects driving them, "database nazi" :) because all the logic code resides in the database. I do know about DB procedure performance benefits, but now I'm interested in the "better security" when using a thick-client model. I assume such a design is mostly used when Oracle (and maybe MS SQL Server) databases are used. I do agree such an approach improves security, but only if there are not many users and every system user has a database account, so we can control and monitor data access through standard database user security. However, how could such an approach increase the security of an average web system where thick clients are used: for example, one database user with DML grants on all tables, and other users handled using "users" and "user_rights" tables? We could use DB procedures, save usernames into a context and use that for filtering, but the vulnerability resides at the root - if the main database account is compromised then nothing will help. Of course in a real system we might consider at least several main users (for example frontend_db_user, backend_db_user).

    Read the article

  • What is wrong with this Asynchronous task?

    - by bluebrain
    The onPostExecute method is simply not executed: I can see 16 in LogCat, but nothing from onPostExecute. I tried to debug it, and it seemed that it goes to the first line of the class (the package line) after the return statement.

        private class Client extends AsyncTask<Integer, Void, Integer> {

            protected Integer doInBackground(Integer... params) {
                Log.e(TAG, 10 + "");
                try {
                    socket = new Socket(target, port);
                    Log.e(TAG, 11 + "");
                    oos = new ObjectOutputStream(socket.getOutputStream());
                    Log.e(TAG, 14 + "");
                    ois = new ObjectInputStream(socket.getInputStream());
                    Log.e(TAG, 15 + "");
                } catch (UnknownHostException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                }
                Log.e(TAG, 16 + "");
                return 1;
            }

            protected void onPostExecute(Integer result) {
                Log.e(TAG, 13 + "");
                try {
                    Log.e(TAG, 12 + "");
                    oos.writeUTF(key);
                    Log.e(TAG, 13 + "");
                    if (ois.readInt() == OKAY) {
                        isConnected = true;
                        Log.e(TAG, 14 + "");
                    } else {
                        Log.e(TAG, 15 + "");
                        isConnected = false;
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                    isClosed = true;
                }
            }
        }
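
    Two things worth checking (assumptions, not confirmed by the post): the AsyncTask must be created and execute()d on the UI thread for its callbacks to be delivered, and an @Override annotation on both methods lets the compiler catch any signature mismatch that would otherwise silently stop AsyncTask from calling them. A sketch:

        private class Client extends AsyncTask<Integer, Void, Integer> {
            @Override
            protected Integer doInBackground(Integer... params) {
                // ... the network work from the question ...
                return 1;
            }

            @Override
            protected void onPostExecute(Integer result) {
                // Runs on the UI thread once doInBackground has returned.
            }
        }

        // Somewhere on the UI (main) thread, e.g. in an Activity callback:
        new Client().execute(0);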

    Read the article

  • How would I construct a terminal command to download a folder with wget from a Media Temple (gs) server?

    - by racl101
    I'm trying to download a folder using wget in the Terminal (I'm using a Mac, if that matters) because my FTP client sucks and keeps timing out. It doesn't stay connected for long. So I was wondering if I could use wget to connect via the FTP protocol to the server to download the directory in question. I have searched around on the internet for this and have attempted to write the command, but it keeps failing. So, assuming the following:

        ftp username is: [email protected]
        ftp host is: ftp.s12345.gridserver.com
        ftp password is: somepassword

    I have tried to write the command in the following ways:

        wget -r ftp://[email protected]:[email protected]/path/to/desired/folder/
        wget -r ftp://serveradmin:[email protected]/path/to/desired/folder/

    When I try the first way I get this error:

        Bad port number.

    When I try the second way I get a little further, but I get this error:

        Resolving s12345.gridserver.com... 71.46.226.79
        Connecting to s12345.gridserver.com|71.46.226.79|:21... connected.
        Logging in as serveradmin ... Login incorrect.

    What could I be doing wrong?
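
    A hedged sketch of another way to pass the credentials (USERNAME and PASSWORD below are placeholders for the real values): because the FTP username itself contains an '@', embedding it in the URL confuses wget's host parsing, so giving the credentials as separate options (or percent-encoding the '@' in the username as %40) usually avoids both errors shown above.

        wget -r --ftp-user='USERNAME' --ftp-password='PASSWORD' \
             ftp://ftp.s12345.gridserver.com/path/to/desired/folder/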

    Read the article

  • Need help with transferring data between MySQL db's using PHP

    - by JM4
    In one of the sites I manage, the client has decided to take on ACH/bank account administration where it was previously outsourced. As a result, the information submitted in our online form, which used to be stored in a single database for processing, must now sit in 'limbo' until the funds used for payment have been verified. My original plan is as follows: At the end of an enrollment, all form data is collected and stored in a single MySQL database. Our internal administrator receives an email notification reminding him that enrollments have taken place. He processes the ACH information collected and waits the 3-4 business days needed for the payment to clear. Once the payment information has been returned as good (I haven't considered what I will do with the 'bad' ones yet), the administrator can log into a secure portal which allows him to click a button to 'process' the full information once compared and verified. The process, simplified:

        Enrollment complete: data stored in DB 'A'
        Funds verified and link clicked: data from 'A' is copied to DB 'B' and deleted from 'A'

    I have run similar processes with CSV output before and simply used

        //transfers old data to archive
        $transfer = mysql_query('INSERT INTO '.$archive.' SELECT * FROM '.$table) or die(mysql_error());

        //empties existing table
        $query = mysql_query('TRUNCATE TABLE '.$table) or die(mysql_error());

    but in those cases, ALL data was copied and deleted. I only want to copy and delete a single record. Any idea how to accomplish this?
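
    A sketch of the single-record version (assumptions: each enrollment row has an integer primary key named id, $id has already been validated, and the enrollment_id form field is hypothetical):

        // Copy one verified enrollment to the archive, then remove it from the live table.
        $id = (int) $_POST['enrollment_id'];

        mysql_query('INSERT INTO '.$archive.' SELECT * FROM '.$table.' WHERE id = '.$id)
            or die(mysql_error());
        mysql_query('DELETE FROM '.$table.' WHERE id = '.$id)
            or die(mysql_error());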

    Read the article

  • Dynamic Views based on view models

    - by Joe
    I have an ASP.NET MVC 2 app. I need to display the same page to each user, but each user has different rights to the data; i.e. some can see but not edit some data, and some can neither edit nor see the data. Ideally, data that cannot be seen or edited is whitespace on the view. For security reasons I want my viewmodels to be as sparse as possible. By that I mean that if a field cannot be seen or edited, that field should not be on the viewmodel. Obviously I could write a view for each viewmodel, but that seems wasteful. So here is my idea/wishlist: Can I decorate the viewmodel with attributes and hook into a pre-render event of the HTML helpers and tell it to output &nbsp; instead? Can I have the HTML helpers output &nbsp; for entries not found on the viewmodel? Or can I easily convert a view into code, programmatically build the markup, and then pass it to the render engine to be processed and viewed as HTML on the client side?
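
    One way to get the "output &nbsp; for entries not found on the viewmodel" behaviour is a small helper extension that checks the model with reflection before rendering anything. This is a sketch of the idea, not an established API; the helper name is made up:

        public static MvcHtmlString TextBoxIfPresent(this HtmlHelper html, string propertyName)
        {
            object model = html.ViewData.Model;
            if (model == null || model.GetType().GetProperty(propertyName) == null)
            {
                // The property isn't on this user's viewmodel, so render blank space.
                return MvcHtmlString.Create("&nbsp;");
            }
            return html.TextBox(propertyName);
        }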

    Read the article

  • Redirect after refreshing update panel

    - by teebot.be
    Hello, do you think it's possible to refresh an update panel and, immediately after, redirect the response (for instance, to a download)? I tried this: an invisible button acts as an AsyncPostBackTrigger; a download button, when clicked, uses its OnClientClick script to click the invisible button; the click event on the invisible button refreshes the update panel; then the download button's click event launches the download (a normal postback which launches the download). However, for some reason, when the invisible button is clicked by the download button's client script, it doesn't refresh the update panel. Do you have an idea why it doesn't work? Or do you have other, cleaner techniques? Here's how the elements are declared:

        <asp:Button runat="server" ID="ButtonInvisible" Text="" Click="RefreshDisplay" />
        <asp:Button runat="server" ID="ButtonDownload" Text="Download"
            OnClientClick="clickInvisible(this.id)" Click="Download" />
        <Triggers>
            <asp:AsyncPostBackTrigger ControlID="ButtonInvisible" />
        </Triggers>

        //the javascript
        <script type="text/javascript" language="javascript">
            function clickInvisible(idButton) {
                document.getElementById('ButtonInvisible').click();
            }
        </script>

        //the methods
        Download(object source, EventArgs e) { Response.Redirect("test.txt"); }
        RefreshDisplay(object source, EventArgs e) { ButtonCancel.Enabled = false; }

    Read the article
