Search Results

Search found 20816 results on 833 pages for 'vsphere client'.

Page 742/833

  • CSS Design question, I've got myself completely turned around.

    - by Matt Dawdy
    Okay, I have a couple of other questions out there, but I think I'd better just ask from the beginning how you CSS experts would do this. The client's page is split into 2 rows -- the header has some info, some aligned to the left of the page, some to the right, some in the middle. This is currently done using a table. I'm fine with leaving this alone or changing it. My real question is that I need a page layout to handle the following: 2 columns -- the column on the left is 200px, but can be "closed" down to 10px (not a slider, it's either 200px or 10px). The column on the right needs to be as big as it needs to be -- which might be larger than the width of the page. When the left column is "closed" the right column slides over, of course. Again, this right column might be 300px or it might be 4000 pixels (it's a reporting interface). Now, to add another wrinkle, SOME pages have 3 columns. The first 2 columns are each exactly 200px, and both can be "closed" down to 10px each. But the user may not close both columns -- maybe just 1. Or none. Or both. The third column needs to act just like I described above, being able to be larger than the page width and sliding over to take advantage of any of the "closed" left columns. Whew! I'm pretty confused as to how to go about this: either I get the layout right but can't scroll over to the right at all (overflow: hidden) and information is lost, or the right column jumps down below the left 2 columns and just looks plain stupid. My minimum browser requirements are IE8, FF3.5, Chrome and Safari (latest versions of all). Any and all pointers are gladly accepted.
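
    One possible starting point (a rough sketch only; the id names, the 210px margin and the idea of a class toggle are my own assumptions, not anything from the original page) is to float the fixed-width left column and give the right column a left margin plus its own horizontal scrollbar, so a 4000px report scrolls inside the column instead of wrapping below it:

        <div id="left" style="float: left; width: 200px;">menu</div>
        <div id="right" style="margin-left: 210px; overflow-x: auto;">
            <div style="width: 4000px;">wide report content</div>
        </div>

    When the left column is "closed", a small script would swap width: 200px for width: 10px and margin-left: 210px for margin-left: 20px; the three-column pages would repeat the same pattern with a second floated column.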

    Read the article

  • How should I design my MySQL table(s)?

    - by yaya3
    I built a really basic PHP/MySQL site for an architect that uses one 'projects' table. The website showcases various projects that he has worked on. Each project contained one piece of text and one series of images. Original projects table (create syntax):

        CREATE TABLE `projects` (
          `project_id` int(11) NOT NULL auto_increment,
          `project_name` text,
          `project_text` text,
          `image_filenames` text,
          `image_folder` text,
          `project_pdf` text,
          PRIMARY KEY (`project_id`)
        ) ENGINE=MyISAM AUTO_INCREMENT=8 DEFAULT CHARSET=latin1;

    The client now requires the following, and I'm not sure how to handle the expansions in my DB. My suspicion is that I will need an additional table. Each project now has 'pages'. Pages contain either one image, one "piece" of text, or one image and one piece of text. Each page could use one of three layouts. As each project does not currently have more than 4 pieces of text (a very risky assumption) I have expanded the original table to accommodate everything. New projects table attempt (create syntax):

        CREATE TABLE `projects` (
          `project_id` int(11) NOT NULL AUTO_INCREMENT,
          `project_name` text,
          `project_pdf` text,
          `project_image_folder` text,
          `project_img_filenames` text,
          `pages_with_text` text,
          `pages_without_img` text,
          `pages_layout_type` text,
          `pages_title` text,
          `page_text_a` text,
          `page_text_b` text,
          `page_text_c` text,
          `page_text_d` text,
          PRIMARY KEY (`project_id`)
        ) ENGINE=MyISAM AUTO_INCREMENT=8 DEFAULT CHARSET=latin1;

    In trying to learn more about MySQL table structuring I have just read an intro to normalization and A Simple Guide to Five Normal Forms in Relational Database Theory. I'm going to keep reading! Thanks in advance
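
    One direction the normalization reading points toward (only a sketch; the column names and types here are my guesses, not part of the original schema) is to move the per-page data out of `projects` into a child table keyed by `project_id`, so a project can own any number of pages without adding page_text_e, page_text_f and so on:

        CREATE TABLE `pages` (
          `page_id` int(11) NOT NULL AUTO_INCREMENT,
          `project_id` int(11) NOT NULL,
          `page_order` int(11) NOT NULL DEFAULT 0,
          `layout_type` tinyint(4) NOT NULL DEFAULT 1,
          `page_title` text,
          `page_text` text,
          `page_image_filename` text,
          PRIMARY KEY (`page_id`),
          KEY `idx_pages_project` (`project_id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

    Fetching a project's pages is then a join on `project_id`, and a page with a NULL `page_text` or a NULL `page_image_filename` covers the image-only and text-only layouts.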

    Read the article

  • I have a problem with a4j:commandButton and re-rendering ...

    - by ollie314
    The code below shows what is failing in my application. It is a quick-add form: filling it out and submitting it adds a new entry to the database and refreshes my data table. This is all done with Ajax. The first form submission happens correctly, but the second one fails to run the desired ActionListener. The second form post does reach the server, but the save action listener isn't invoked. As you will see, I'm a real beginner with these technologies ... If someone sees the problem, it would be very helpful!

        <rich:simpleTogglePanel id="quickaddActivitySimpleToogle" switchType="client"
                opened="false" label="#{lang.activityModule_quickAdd_panelTitle}">
            <p><a4j:form id="quickAddForm">
                <h:outputLabel for="activityNameInput" value="#{lang.activity_name_dp}" />
                <h:inputText id="activityNameInput"
                    value="#{activityController.quickActivityAdd.name}">
                </h:inputText>
                <rich:spacer width="20px" />
                <h:inputHidden id="activityInternalNameInput"
                    value="#{activityController.quickActivityAdd.internalName}" />
                <rich:spacer width="20px" />
                <a4j:commandButton id="activityQuickAddFormSubmitBtn"
                    reRender="activityListTable,quickAddForm"
                    actionListener="#{activityController.saveActivity}"
                    value="#{lang.saveBtn_header}" />
            </a4j:form></p>
        </rich:simpleTogglePanel>

    Thanks in advance. ollie314

    Read the article

  • Simple multilingual CMS?

    - by Christoffer
    Hi, I have been searching for a while now for a dead simple CMS with multi-language support. The ideal candidate is very lean and offers the possibility to set up different languages for different domains. It's OK if the language support is provided by a plugin/extension. For example, I want example.com to serve English and example.fr to serve French, with different URI mappings for SEO. It can be developed in PHP, Ruby or Python and has to be open source. Any tips? Thank you. EDIT / MORE DETAILS: What I want is a CMS that is as simple for a client to use and grasp as Radiant, but with tabs on each resource for translating articles into different languages. Languages have to be able to use multiple domains, one for each language. I want to easily use the same article for more than one language, as well as have articles (e.g. blog posts or news stories) that are connected to only one language. The CMS should be very light in core functionality (like Radiant, unlike Drupal/Joomla) but easily extendable with plugins.

    Read the article

  • .NET Remoting Connecting to Wrong Host

    - by Dark Falcon
    I have an application I wrote which has been running well for 4 years. Yesterday they moved all their servers around and installed about 60 pending Windows updates, and now it is broken. The application makes use of remoting to update some information on another server (10.0.5.230), but when I try to create my remote object, I get an exception showing that it is trying to connect to 127.0.0.1, not the proper server. The server (10.0.5.230) is listening on port 9091 as it should. This same error is happening on all three terminal servers where this application is installed. Here is the code which registers the remoted object:

        public static void RegisterClient()
        {
            string lServer;
            RegistryKey lKey = Registry.CurrentUser.OpenSubKey("SOFTWARE\\Shoreline Teleworks\\ShoreWare Client");
            if (lKey == null) throw new InvalidOperationException("Could not find Shoretel Call Manager");
            object lVal = lKey.GetValue("Server");
            if (lVal == null) throw new InvalidOperationException("Shoretel Call Manager did not specify a server name");
            lServer = lVal.ToString();
            IDictionary props = new Hashtable();
            props["port"] = 0;
            string s = System.Guid.NewGuid().ToString();
            props["name"] = s;
            ChannelServices.RegisterChannel(new TcpClientChannel(props, null), false);
            RemotingConfiguration.RegisterActivatedClientType(typeof(UpdateClient), "tcp://" + lServer + ":" + S_REMOTING_PORT + "/");
            RemotingConfiguration.RegisterActivatedClientType(typeof(Playback), "tcp://" + lServer + ":" + S_REMOTING_PORT + "/");
        }

    Here is the code which calls the remoted object:

        UpdateClient lUpdater = new UpdateClient(Settings.CurrentSettings.Extension.ToString());
        lUpdater.SetAgentState(false);

    I have verified that the following URI is passed to RegisterActivatedClientType: "tcp://10.0.5.230:9091/". Why does this application try to connect to the wrong server?

    Read the article

  • How to host WCF service and TCP server on same socket?

    - by Ole Jak
    Today I use ServiceHost for self-hosting WCF services. Next to my WCF services I want to host my own TCP program for direct socket operations (like listening to some sort of broadcasting TCP stream). I need control over namespaces, so that my clients are able to send TCP streams directly into my service using nice URLs like example.com:port/myserver/stream?id=1 or example.com:port/myserver/stream?id=anything, and so that I am not bothered by the idea of one client per socket at any one moment. I really want to keep my WCF services on the same port as my own server, or whatever it is, so that I can still call www.example.com:port/myWCF/stream?id=222... Can anybody please help me with this? I am using just WCF now, and I do not enjoy how it works. That is one of many reasons why I want to start migrating to plain TCP =) I cannot use net-tcp binding or any other cool WS-* binding (today I use the simplest one so that my clients like Flash, AJAX, etc. connect to me with ease). I need a connection protocol that is fast and easy to implement, like the one I created for use with sockets for real-time transfer of large amounts of data. So.. any ideas? Please - I need help.

    Read the article

  • Can any Linux API or tool watch for any change in any folder below e.g. /SharedRoot, or do I have to set up inotify for each folder?

    - by Simon B.
    I have a folder with ~10 000 subfolders. Can any Linux API or tool watch for any change in any folder below e.g. /SharedRoot, or do I have to set up inotify for each folder? (That approach loses its appeal if I have to do it for 10k+ folders.) I guess the answer is yes, since I've already seen examples of this inefficient method, for instance http://twistedmatrix.com/trac/browser/trunk/twisted/internet/inotify.py?rev=28866#L345. My problem: I need to keep folders time-sorted with the most recently active "project" up top. When a file changes, each folder above that file should update its last-modified timestamp to match the file. Delays are OK. When a file (typically MS Excel) is opened and closed again, its file date can jump up and then down again. For this reason I need to wait until after a file is closed, then queue the folder of that file for checking, and only a while later do I go and look for the newest file in its folder, since the file date of the triggering file could already be back-dated to its original timestamp by Excel or similar programs. Also, in case several files from the same folder are used or created, it makes sense to buffer timestamping of that folder's parents so that at least a bunch of updates are collapsed into one delayed update. I'm looking for a Linux solution. I have some code that can be run on a Windows server; most of the queuing functionality is here: http://github.com/sesam/FolderdateFollowsFiles/blob/master/FolderdateFollowsFiles/Follower.vb Available APIs: the relative of inotify on Windows, ReadDirectoryChangesW, can watch a folder and its whole subtree; see bWatchSubtree on http://msdn.microsoft.com/en-us/library/aa365465(VS.85).aspx Samba? Patching the Samba source is a possibility, but perhaps there are already hooks available? Other possibilities include working client side (various Windows versions) and spying on file activities in order to update folders recursively.

    Read the article

  • how can I solve a problem with andWhere with symfony/doctrine and odbc?

    - by JaSk
    While following the symfony tutorial (1.4.4) I'm getting an error with ODBC / MSSQL 2008:

        SQLSTATE[07002]: COUNT field incorrect: 0 [Microsoft][SQL Server Native Client 10.0]
        COUNT field incorrect or syntax error (SQLExecute[0] at ext\pdo_odbc\odbc_stmt.c:254).
        Failing Query: "SELECT [j].[id] AS [j__id], [j].[category_id] AS [j__category_id],
        [j].[type] AS [j_type], [j].[company] AS [j_company], [j].[logo] AS [j_logo],
        [j].[url] AS [j_url], [j].[position] AS [j_position], [j].[location] AS [j_location],
        [j].[description] AS [j__description], [j].[how_to_apply] AS [j__how_to_apply],
        [j].[token] AS [j__token], [j].[is_public] AS [j__is_public],
        [j].[is_activated] AS [j__is_activated], [j].[email] AS [j__email],
        [j].[expires_at] AS [j__expires_at], [j].[created_at] AS [j__created_at],
        [j].[updated_at] AS [j__updated_at] FROM [jobeet_job] [j]
        WHERE ([j].[category_id] = '2' AND [j].[expires_at] ?) ORDER BY [j].[expires_at] DESC"

    I've narrowed the problem to a line that uses parameters:

        public function getActiveJobs(Doctrine_Query $q = null)
        {
            if (is_null($q))
            {
                $q = Doctrine_Query::create()
                    ->from('JobeetJob j');
            }
            //$q->andWhere('j.expires_at > \''.date('Y-m-d H:i:s', time()).'\''); <-- this works
            $q->andWhere('j.expires_at > ?', date('Y-m-d H:i:s', time())); // <-- this line has the problem
            $q->addOrderBy('j.expires_at DESC');
            return $q->execute();
        }

    Can anyone point me in the right direction? Thanks.

    Read the article

  • Global.asax parser errors when deploying MVC 1 application to remote server.

    - by mannish
    So we're having some issues deploying an ASP.NET MVC app to a client site. Basically, when we try to test the app from localhost, we get the dreaded Global.asax parser error indicating it could not load the application global. Research indicates there are basically 4 possible reasons for the exception we're seeing:

    1. The solution hasn't been built. This clearly isn't the case, since we can deploy it here and it runs fine on any machine we deploy to, AND we had to build and publish the darn thing to deploy it anyway.
    2. The Global.asax namespace inheritance does not match the application global code file. Again, we double-checked this, and since it runs just fine here that can't be the issue.
    3. Miscellaneous non-descript IIS/VS.NET mischief. Basically something gets wonky in IIS or VS.NET and the web server won't behave correctly for this application. We've done cleans and rebuilds, we've deleted the virtual dir and recreated it, and performed all of the IIS munging that we've found elsewhere online. Various combinations of IIS bounces, server reboots, virtual dir/application recreation, etc.
    4. Code-level permissions issue. We've verified full trust in machine/web config in the framework directory, we've set .NET trust to full in IIS, we've granted Everyone full control on the directories just to hit it with the security hammer, etc. etc.

    The pertinent details: Windows Server 2008 x64; IIS 7 with a 32-bit compatible app pool (the app was written on a 32-bit OS, compiled for Any CPU); app pool identity set to NetworkService; Microsoft ASP.NET MVC 1.0; XCopy deployment. We deployed another read-only app just fine. The significant difference in this app is the use of NHibernate and Log4Net, which require full trust. Additionally, the actual project name of the web project differs from the default namespace; however, the Inherits namespace in Global.asax and the Global.asax.cs files match, so this shouldn't be an issue. Anybody have any bright ideas? We're officially down to just the dim ones.

    Read the article

  • TFS Folders - Getting them to work like Subversion "Trunk/Tags/Branches"

    - by Sam Schutte
    I recently started using Team Foundation Server, and am having some trouble getting it to work the way I want it to. I've used Subversion for a couple of years now, and love the way it works. I always set up three folders under each project: Trunk, Tags, and Branches. When I'm working on a project, all my code lives under a folder called "C:\dev\projectname". This "projectname" folder can be made to point to either trunk, or any of the branches or tags, using Subversion (with the switch command). Now that I'm using TFS (my client's system), I'd like things to work the same way. I created a "Trunk" folder with my project in it, and mapped "Project/Trunk/Website" to "c:\dev\Website". Now, I want to make a release under the "tags" folder (located in "Project/Tags/Version 1.0/Website"), and TFS is giving me the following error when I execute the branch command: "No appropriate mapping exists for $Project/tags/Version 1.0/Website". From what I can find on the internet, TFS expects you to have a mapping to your hard drive at the root of the project (the "Project" folder in my case), and then have all the source code that lives in trunk, tags and branches pulled down to your hard drive. This sucks because it requires way too much stuff on your hard drive, and even worse, when you are working in a solution in Visual Studio, you won't be able to pull down "Version 2.0" and have all your project references to other projects work, because they'll all be pointing to "trunk" folders under the main folder, not just the main folder itself. What I want to do is have the root "Project/Website" folder on my hard drive, and be able to have it point to (be mapped to) either tags, branches, or trunk, depending on what I'm doing, without having to screw around with fixing Visual Studio project references. Ideas?

    Read the article

  • .NET proxy detection

    - by Ziplin
    I am having an issue with .NET detecting the proxy settings configured through Internet Explorer. I'm writing a client application that supports proxies, and to test I set up an array of 9 Squid servers to support various authentication methods for HTTP and HTTPS. I have a script that updates IE to whichever configuration I choose (which proxy, detection via "Auto", PAC, or hardcode). I have tried the 3 methods below to detect the IE configuration through .NET. On occasion I notice that .NET picks up the wrong set of proxy servers. IE has the correct settings, and if I browse the web with IE, I can see I am hitting the correct servers via Wireshark.

        WebRequest.GetSystemWebProxy().GetProxy(destination);
        GlobalProxySelection.Select.GetProxy(destination);
        WebRequest.DefaultWebProxy

    Here are the clues I have:

    - My script sets a PAC file on a webserver, updates the configuration in IE, then clears IE's cache.
    - .NET seems to get "stuck" on a certain proxy configuration, and I have to set another configuration for .NET to realize there was a change.
    - Occasionally it seems to pick some random set of servers (I'm sure they're not random, just a set of servers I used once that are in some cached PAC file or something). For instance, I will check the proxy for the destination "https://www.secure.com" and expect to get "http://squidserver:18", and instead it will return "http://squidserver:28" (port 18 runs NTLM, 28 runs without authentication). All the squid servers work.
    - This does not appear to be an issue on XP, only Vista, 2003, and Windows 7.
    - Hardcoding the proxy servers in IE ALWAYS works.
    - Time always solves the issue - if I leave the computer for about 20 or 30 minutes and come back, .NET picks up the correct proxy settings, as if a cached PAC script expired.

    Read the article

  • ODBC and NLS_LANG

    - by Michael S.
    Let's say that I've created two different program executables, e.g. in C++. For some reason, the two programs' internal representations of text are different from each other. Let's say the first program uses text representation A and the other text representation B. It could be a specific 8-bit ANSI codepage, Unicode/UTF-8 or Unicode/UTF-16 or whatever. Now each program wants to communicate text (add/retrieve data) to/from the same database table on a (database) server. Each program communicates with the database through ODBC, so the programs do not know what database system they are communicating with. In this specific case the database is actually an Oracle RDBMS, and the database server administrator has set up the database to use UTF-8. On the system on which the programs are running an appropriate ODBC driver is available, so that the programs can connect through ODBC. Each program will treat and convert from the ODBC data type SQL_C_CHAR to its internal text representation appropriately. I assume that the programs cannot do anything other than assume a specific encoding for the SQL_C_CHAR text that is returned; if not, the programs have to be told which encoding that is. For Oracle, I know that the NLS_LANG environment variable can be used on the client. I assume it affects the ODBC driver (related to SQL_C_CHAR) to convert from a specific encoding (as given by NLS_LANG) to the internal encoding of the database (in this example UTF-8) and vice versa. If the machine running my programs has NLS_LANG set, this setting will affect the byte sequences returned for SQL_C_CHAR, so my programs cannot simply assume a specific encoding for the text returned via SQL_C_CHAR. Is it possible to set up the ODBC connection (preferably programmatically at runtime) so that it takes care of text conversions appropriately for the two programs, i.e. from/to representation A to/from UTF-8 and from/to representation B to/from UTF-8? Regards, /Michael PS. As the programs are connecting through ODBC, I don't think it would be nice if they had to know anything about NLS_LANG, as this is an Oracle-specific environment variable.

    Read the article

  • GUI toolkit for Unicode text app?

    - by wrp
    In developing a tool for processing text in exotic scripts, I'm having trouble choosing a GUI toolkit. The main part of the interface is to be a text editor, not much more elaborate than Notepad, but with its own input method editor. It is to be extensible in a scripting language so that non-programmers can develop their own input methods and display routines. It will be assumed that all files are UTF-8. More elaborate support like regexes is not needed. The main sticking points are:

    - characters beyond the Basic Multilingual Plane
    - right-to-left and bi-directional text
    - extension in a scripting language
    - cross-platform Linux/Windows/OS X

    My first choice was Tcl/Tk, but it lacks bidi and going beyond the BMP seems dodgy. At the other extreme, I've considered Qt with embedded ECMAScript, but that might be heavier and less malleable than I would like. I'm even thinking about making it browser based, but I'm concerned that the IM for large scripts would be too heavy for client-side processing. I've also looked at a few similar projects in Java, but the quality of the font rendering in Swing has been unacceptable. What are your experiences in handling Unicode with various toolkits? Are there other serious issues I haven't considered? What would you recommend for doing this in the lightest way?

    Read the article

  • Ajax long polling (comet) + php on lighttpd v1.4.22 multiple instances problem.

    - by fibonacci
    Hi, I am new to this site, so I really hope I will provide all the necessary information regarding my question. I've been trying to create a "new message arrived" notification using long polling. Currently I am initiating the polling request from the window.onLoad event of each page on my site. On the server side I have an infinite loop:

        while (1) {
            if (NewMessageArrived($current_user)) break;
            sleep(10);
        }
        echo $newMessageCount;

    On the client side I have the following (simplified) Ajax functions:

        function poll_new_messages() {
            xmlhttp = GetXmlHttpObject();
            //...
            xmlhttp.onreadystatechange = got_new_message_count;
            //...
            xmlhttp.send();
        }

        function got_new_message_count() {
            if (xmlhttp.readyState == 4) {
                updateMessageCount(xmlhttp.responseText);
                //...
                poll_new_messages();
            }
        }

    The problem is that with each page load, the above loop starts again. The result is multiple infinite loops for each user that eventually make my server hang. The NewMessageArrived() function queries a MySQL DB for new unread messages. At the beginning of the PHP script I run session_start() in order to obtain the $current_user value. I am currently the only user of this site, so it is easy for me to debug this behavior by writing time() to a file inside the loop. What I see is that the file is being written more often than once every 10 seconds, but this starts only when I go from page to page. Please let me know if any additional information might help. Thank you.
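
    One thing that would at least bound the damage (just a sketch of the idea; the 50-second cap and the session handling are my own assumptions, not part of the original script) is to release the session lock before waiting and to give the loop a hard timeout, so a request abandoned by a page navigation dies on its own instead of looping forever:

        <?php
        session_start();                    // as in the original, to obtain the current user
        $current_user = $_SESSION['user_id'];  // assumed key, for illustration only
        session_write_close();              // release the session lock while we wait
        $start = time();
        while (time() - $start < 50) {      // hard cap: the client simply re-polls
            if (NewMessageArrived($current_user)) break;
            sleep(10);
        }
        echo $newMessageCount;

    The client-side got_new_message_count() already re-issues the poll, so a timed-out request just comes back with an unchanged count.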

    Read the article

  • Can a stateless WCF service benefit from built-in database connection pooling?

    - by vladimir
    I understand that a typical .NET application that accesses a(n SQL Server) database doesn't have to do anything in particular in order to benefit from connection pooling. Even if an application repeatedly opens and closes database connections, they do get pooled by the framework (assuming that things such as credentials do not change from call to call). My usage scenario seems to be a bit different. When my service gets instantiated, it opens a database connection once, does some work, closes the connection and returns the result. Then it gets torn down by WCF, and the next incoming call creates a new instance of the service. In other words, my service gets instantiated per client call, as in [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]. The service accesses an SQL Server 2008 database. I'm using .NET Framework 3.5 SP1. Does connection pooling still work in this scenario, or do I need to roll my own connection pool in the form of a singleton or by some other means (IInstanceContextProvider?)? I would rather avoid reinventing the wheel, if possible.

    Read the article

  • Reducing moire when downsampling halftone comic images.

    - by drawnonward
    How can I reduce moire effects when downsampling halftone comic book images during live zoom on an iPhone or iPad? I am writing a comic book viewer. It would be nice to provide higher resolution images and allow the user to zoom in while reading the comic book. However, my client is averse to moire effects and will not allow this feature if there are noticeable moire artifacts while zooming, which of course there are. Modifying the images to be less susceptible to moire would only work if the modifications were not perceptible. Blur was specifically prohibited, as is anything that removes the beloved halftone dots. The images are black and white halftone and line art. The originals are 600 dpi but what we ship with the application will be half that at best, so probably 2500 pixels or less tall. So what are my options? If I write a custom downsampling algorithm would it be fast enough for real time on these devices? Are there other tricks I can do? Would it work to just avoid the size ratios that have the most visual moire effects? As you zoom in an out, there are definitely peaks where the moire effects are worst. Is there a way to calculate what those points are and just zoom to a nearby scale that is not as bad? Any suggestions are welcome. I have very little experience with image and signal processing, but am enjoying the opportunity to learn. I know nothing of wavelets and acutance and other jargon, so please be verbose.

    Read the article

  • Starting a process in one HTTP call and getting results in another

    - by KillianDS
    Hi, I'm writing a very simple testing framework for my application. The design isn't perfect, but I don't have time to write something more complex. Essentially, I have a client and a server application; on my server I want a small Python web server to start the server application with given test sequences on a GET or POST call. Also, the application prints some test data to stderr which I'd like to catch and return in another HTTP call. At the moment I have this:

        from subprocess import Popen, PIPE
        from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

        p = None

        class MyHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                global p
                if self.path.endswith("start/"):
                    p = Popen(["./bin/Release/simplex264", "BBB-360", "127.0.0.1"], stderr=PIPE)
                    print 'started'
                    return
                elif self.path.endswith("getResults/"):
                    self.wfile.write(p.stderr.read())
                    return
                self.send_error(404, 'File Not Found: %s' % self.path)

        def main():
            try:
                server = HTTPServer(('localhost', 9876), MyHandler)
                print 'Started server...'
                server.serve_forever()
            except KeyboardInterrupt:
                print 'Shutting down...'
                server.socket.close()

        if __name__ == '__main__':
            main()

    Which 'works', except for one part: when I try to open http://localhost:9876/start/, it does not return until the process has ended. However, the 'started' appears in my shell immediately (I added this because I thought the Popen call would only return after execution). I do not know the exact inner workings of Popen and BaseHTTPRequestHandler, however, and do not really know where it goes wrong. Is there any way to make this work asynchronously?
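
    One detail worth knowing here (a sketch only, not a verified fix for this exact symptom) is that BaseHTTPServer's HTTPServer handles one request at a time, so any handler that blocks - for example getResults/ sitting in p.stderr.read() until the child exits - stalls every other request. A threading server lets the slow handler run on its own thread:

        # sketch: serve each request on its own thread (Python 2 module names)
        from SocketServer import ThreadingMixIn
        from BaseHTTPServer import HTTPServer

        class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
            """Handle each request in a separate thread."""

        def main():
            server = ThreadedHTTPServer(('localhost', 9876), MyHandler)
            server.serve_forever()

    The start/ handler should also send an explicit status line (send_response(200) plus end_headers()) before returning, since as written it never sends the browser any reply, and passing close_fds=True to Popen is worth trying so the child does not inherit (and hold open) the request socket.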

    Read the article

  • Checkbox in an email

    - by Austin
    I am creating an email using the C# MailMessage class, and I am trying to add a checkbox that doesn't need to be clicked. The checkboxes will be used for a checklist of what to bring to an event (like a packing list). I have:

        MailMessage objEmail = new MailMessage();
        objEmail.From = new MailAddress("[email protected]");
        objEmail.To.Add(new MailAddress("[email protected]"));
        objEmail.CC.Add(new MailAddress("[email protected]"));
        objEmail.Bcc.Add(new MailAddress("[email protected]"));
        objEmail.Subject = "Packing list!";
        objEmail.IsBodyHtml = true;
        objEmail.Body = @"<div width=""800px"">
            <h3>WHAT TO BRING</h3>
            <form>
            <input type=""checkbox"" name=""item"" value=""shirt"">Shirt<br>
            <input type=""checkbox"" name=""item"" value=""shoes"">Shoes
            </form></div>";

    but when I send the email the checkboxes do not appear in the list. Output in Outlook using outlook.com:

        WHAT TO BRING
        I have a bike
        I have a car

    Output in Outlook using Microsoft Outlook:

        WHAT TO BRING
        [ ] I have a bike
        [ ] I have a car

    Output in Outlook using hotmail.com:

        WHAT TO BRING
        I have a bike
        [] I have a car

    So the problem is with the mail client, but the behaviour is inconsistent. Is there any way to get consistent output? Is there a way with HTML that works to create the checkboxes, or do I just need to include images of a checkbox? Thanks in advance.
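
    One workaround worth sketching (an assumption on my part, not something verified across these clients) is to drop the <form> and <input> elements entirely, since some mail clients strip form controls from HTML mail, and emit a plain bracket glyph per item instead:

        // sketch: static "[ ]" text instead of <input type="checkbox">,
        // since several mail clients strip form elements from HTML mail
        objEmail.Body = @"<div width=""800px"">
            <h3>WHAT TO BRING</h3>
            [&nbsp;] Shirt<br>
            [&nbsp;] Shoes
            </div>";

    Referencing an image of an empty checkbox per item is the other common route when the rendering has to look identical everywhere.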

    Read the article

  • Capturing window image in windows server 2008

    - by Sergey Osypchuk
    I am capturing the output of a Windows program using the following function:

        public static Bitmap Get(IntPtr hWnd, int X1, int Y1, int width, int height)
        {
            WINDOWINFO winInfo = new WINDOWINFO();
            bool ret = GetWindowInfo(hWnd, ref winInfo);
            if (!ret)
            {
                return null;
            }
            int curheight = height;
            if (curheight <= 0 || curheight > winInfo.rcWindow.Height)
                curheight = winInfo.rcWindow.Height;
            int curwidth = width;
            if (curwidth <= 0 || curwidth > winInfo.rcWindow.Width)
                curwidth = winInfo.rcWindow.Width;
            if (curheight == 0 || curwidth == 0)
                return null;
            Graphics frmGraphics = Graphics.FromHwnd(hWnd);
            IntPtr hDC = GetWindowDC(hWnd); // gets the entire window
            //IntPtr hDC = frmGraphics.GetHdc(); -- gets the client area, no menu bars, etc.
            System.Drawing.Bitmap tmpBitmap = new System.Drawing.Bitmap(curwidth, curheight, frmGraphics);
            Graphics bmGraphics = Graphics.FromImage(tmpBitmap);
            IntPtr bmHdc = bmGraphics.GetHdc();
            BitBlt(bmHdc, 0, 0, curwidth, curheight, hDC, X1, Y1, TernaryRasterOperations.SRCCOPY);
            bmGraphics.ReleaseHdc(bmHdc);
            ReleaseDC(hWnd, hDC);
            return tmpBitmap;
        }

    In the development environment everything is excellent, but on Windows Server 2008 I have the following issues: 1) when there is another window in front of mine, it is captured as well; 2) when no user is connected over RDC, the image is black. On the other hand, I am able to render webpage images using IE. How can I change the behaviour of the Windows rendering process to get proper results?

    Read the article

  • Has form post behavior changed in modern browsers? (or How are double clicks handled by the browser)

    - by Alex Czarto
    Background: We are in the process of writing a registration/payment page, and our philosophy was to code all validation and error checking on the server side first, and then add client side validation as a second step (un-obstructive jQuery). We wanted to disable double clicks server side, so we wrote some locking, thread-safe code to handle simultaneous posts/race conditions. When we tried to test this, we realized that we could not cause a simultaneous post or race condition to occur. I thought that (in older browsers anyway) double clicking a submit button worked as follows: User double clicks submit button. Browser sends a post on the first click On the second click, browser cancels/ignores initial post, and initiates a second post (before the first post has returned with a response). Browser waits for second post to return, ignoring initial post response. I thought that from the server side it looked like this: Server gets two simultaneous post requests, executes and responds to them both (unaware that no one is listening to the first response). From our testing (FireFox 3.0, IE 8.0) this is what actually happens: User double clicks submit button Browser sends a post for the first click Browser queues up second click, but waits for the response from the first click. Response returns from first click (response is ignored?). Browser sends a post for the second click. So from a server side: Server receives a single post which it executes and responds to. Then, server receives a second request wich it executes and responds to. My question is, has this always worked this way (and I'm losing my mind)? Or is this a new feature in modern browsers that prevents simultaneous posts to be sent to the server? It seems that for server side double click prevention, we don't have to worry about simultaneous posts or race conditions. Only need to worry about queued up posts. Thanks in advance for any feedback / comments. Alex

    Read the article

  • Configuring IHS server to direct traffic to the Netty component bound to a port

    - by rbot
    I have a server component (based on JBoss Netty, which can maintain and handle persistent connections) deployed in WAS. This component, when deployed and initiated within the WAS environment, binds to a port and listens for incoming HTTP connections. [Why I had to deploy a Netty HTTP server within WAS is another story - a management requirement!! Netty is deployed in WAS as a Spring bean which, when initiated, runs on a port on the machine, independent of WAS.] Clients (a mobile app) were able to establish persistent HTTP connections (to the above URL::Port) with this Netty component and send/receive requests. Now I have to replicate this feature in our production environment, where an IHS server (web server) sits before the WAS. What I expected was to get an IHS URL which could redirect the incoming packets to the specific port on WAS, so that the client apps can establish a similar persistent HTTP connection. Our server admin tried a few combinations and we are not able to identify how to proceed further on this. Your expert ideas would be highly appreciated.

    Read the article

  • pthread condition variables on Linux, odd behaviour.

    - by janesconference
    Hi. I'm synchronizing reader and writer processes on Linux. I have 0 or more processes (the readers) that need to sleep until they are woken up, read a resource, go back to sleep and so on. Please note I don't know how many reader processes are up at any moment. I have one process (the writer) that writes on a resource, wakes up the readers and does its business until another resource is ready (in detail, I developed a no-starve readers-writers solution, but that's not important). To implement the sleep / wake up mechanism I use a POSIX condition variable, pthread_cond_t. The clients call pthread_cond_wait() on the variable to sleep, while the server does a pthread_cond_broadcast() to wake them all up. As the manual says, I surround these two calls with a lock/unlock of the associated pthread mutex. The condition variable and the mutex are initialized in the server and shared between processes through a shared memory area (because I'm not working with threads, but with separate processes), and I'm sure my kernel / syscalls support it (because I checked _POSIX_THREAD_PROCESS_SHARED). What happens is that the first client process sleeps and wakes up perfectly. When I start the second process, it blocks on its pthread_cond_wait() and never wakes up, even if I'm sure (from the logs) that pthread_cond_broadcast() is called. If I kill the first process and launch another one, it works perfectly. In other words, pthread_cond_broadcast() seems to wake up only one process at a time. If more than one process waits on the very same shared condition variable, only the first one manages to wake up correctly, while the others just seem to ignore the broadcast. Why this behaviour? If I send a pthread_cond_broadcast(), every waiting process should wake up, not just one (and, in any case, not always the same one).
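
    For reference, the attribute setup that cross-process use of these objects requires looks roughly like the sketch below (my own illustration of the standard PTHREAD_PROCESS_SHARED initialization; mu and cv are assumed to live in the shared memory segment, and error checking is omitted):

        #include <pthread.h>

        /* sketch: initialize a mutex and condition variable that live in
           shared memory so that several processes can wait on them */
        void init_shared_sync(pthread_mutex_t *mu, pthread_cond_t *cv)
        {
            pthread_mutexattr_t ma;
            pthread_condattr_t ca;

            pthread_mutexattr_init(&ma);
            pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
            pthread_mutex_init(mu, &ma);

            pthread_condattr_init(&ca);
            pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);
            pthread_cond_init(cv, &ca);
        }

    If the objects are instead initialized with default attributes (or with static initializers), waiting from a second process is undefined behaviour, which would match the symptom described above.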

    Read the article

  • How can "today's date" be varied for unit testing purposes?

    - by ck
    I use VS2008 targeting the .NET 2.0 Framework, and, just in case, no, I can't change this :) I have a DateCalculator class. Its method GetNextExpirationDate attempts to determine the next expiration, internally using DateTime.Today as a baseline date. As I was writing unit tests, I realized that I wanted to test GetNextExpirationDate for different 'today' dates. What's the best way to do this? Here are some alternatives I've considered:

    1. Expose a property/overloaded method with a baselineDate argument and only use it from the unit test. In actual client code, disregard the property/overloaded method in favour of the method that defaults baselineDate to DateTime.Today. I'm reluctant to do this as it makes the public interface of the DateCalculator class awkward.
    2. Create a protected field called baselineDate that is internally set to DateTime.Today. When testing, derive a DateCalculatorForTesting from DateCalculator and set baselineDate via the constructor. This keeps the public interface clean, but still isn't great - baselineDate is made protected and a derived class is required, both solely for testing.
    3. Use extension methods. I tried this after adding the ExtensionAttribute, then realized it wouldn't work because extension methods can't access private/protected variables. I initially thought this was really quite an elegant solution. :(

    I'd be interested in hearing what others think.
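
    A variation on the first alternative that keeps the public surface clean (purely a sketch; everything about DateCalculator's internals here is my assumption) is an internal constructor that accepts the baseline date, exposed to the test assembly via InternalsVisibleTo, which is available on .NET 2.0:

        public class DateCalculator
        {
            private readonly DateTime baselineDate;

            // production code keeps using the parameterless constructor
            public DateCalculator() : this(DateTime.Today) { }

            // tests reach this one through [assembly: InternalsVisibleTo("MyTests")]
            internal DateCalculator(DateTime baselineDate)
            {
                this.baselineDate = baselineDate;
            }

            public DateTime GetNextExpirationDate()
            {
                // existing logic, written against baselineDate instead of DateTime.Today
                return baselineDate.AddDays(30); // placeholder for the real calculation
            }
        }

    A test can then construct new DateCalculator(new DateTime(2010, 2, 28)) and assert against a known expiration date.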

    Read the article

  • "Downloading" a computed value form JavaScript

    - by Travis Jensen
    I'm hoping you can prove me wrong here (please, please, please! ;). I have a situation where I need to download encrypted data from a Server D (for "Data"). Server K (for "Key") has the encryption key. For security's sake, I would really prefer that Server D never know the key that Server K knows. What I want is for my client (e.g. your browser) to connect to Server D for the data and Server K for the key and do the decryption locally, so the unencrypted stuff never leaves your computer. I can do this fine for text areas in the DOM by replacing the contents of the HTML. However, sometimes I would like to handle larger files that I stream to the file system. For instance, perhaps I want to encrypt a movie, then decrypt it and stream the contents to my video player. I am not a JavaScript guru by any stretch, especially when it comes to the edge cases of things like the security sandbox. For small D, I can handle the decryption, but I don't know how to save the decrypted file. Large D seems problematic as memory runs out. Anybody have any ideas that don't involve native plugins? Thanks!

    Read the article

  • Problem when getting pageContent of an unavailable URL in Java

    - by tiendv
    I have a code for get pagecontent from a URL: import java.io.BufferedReader; import java.io.IOException; import java.io.InputStreamReader; import java.net.URL; import java.net.URLConnection; public class GetPageFromURLAction extends Thread { public String stringPageContent; public String targerURL; public String getPageContent(String targetURL) throws IOException { String returnString=""; URL urlString = new URL(targetURL); URLConnection openConnection = urlString.openConnection(); String temp; BufferedReader in = new BufferedReader( newInputStreamReader(openConnection.getInputStream())); while ((temp = in.readLine()) != null) { returnString += temp + "\n"; } in.close(); // String nohtml = sb.toString().replaceAll("\\<.*?>",""); return returnString; } public String getStringPageContent() { return stringPageContent; } public void setStringPageContent(String stringPageContent) { this.stringPageContent = stringPageContent; } public String getTargerURL() { return targerURL; } public void setTargerURL(String targerURL) { this.targerURL = targerURL; } @Override public void run() { try { this.stringPageContent=this.getPageContent(targerURL); } catch (IOException e) { e.printStackTrace(); } } } Sometimes I receive an HTTP error of 405 or 403 and result string is null. I have tried checking permission to connect to the URL with: URLConnection openConnection = urlString.openConnection(); openConnection.getPermission() but it usualy returns null. Does mean that i don't have permission to access the link? I have tried stripping off the query portion of the URL with: String nohtml = sb.toString().replaceAll("\\<.*?>",""); where sb is a Stringbulder, but it doesn't seem to strip off the whole query substring. In an unrelated question, I'd like to use threads here because I must retrieve many URLs; how can I create a multi-thread client to improve the speed?

    Read the article
