Search Results

Search found 14961 results on 599 pages for 'mac clients'.


  • Socket Read In Multi-Threaded Application Returns Zero Bytes or ECONNRESET (-1)

    - by user309670
    Hi. I've been a C coder for a while now - neither a newbie nor an expert. I have a daemonized application written in C on PPC Linux. I use PHP's socket_connect as a client to connect to this service locally. The server uses epoll for concurrent connections over a Unix socket. A user-submitted string is parsed for certain characters/words using strstr(), and if a match is found, the server spawns 4 joinable threads that each contact a different website simultaneously. In each thread I use socket, connect, write and read to talk to the webserver over TCP on port 80. All connects and writes appear to succeed.

    The reads from the webserver sockets fail, however, in one of two ways: (A) three threads seem to hang and only one thread returns -1 with errno set to 104, which is ECONNRESET ("Connection reset by peer" - not EINTR). The responding thread takes around 10 minutes - an eternity :-(. Or (B) three threads read 0 bytes and only 1 of the 4 threads actually returns some data. Aren't socket read/write calls thread-safe? I do otherwise use thread-safe (and reentrant) libc functions such as strtok_r, gethostbyname_r, etc. I doubt the webhosts are actually resetting the connection, because when I run a single-threaded standalone version (everything else equal) everything works perfectly.

    There's a second problem too (oops): I can't write back to the client that connects to my epoll-ed Unix socket. My daemon hangs and hogs 100% CPU forever, yet nothing is written to the client's end. I'm sure the client (a very typical PHP socket application) hasn't closed the connection when this happens - no error(s) detected either. I cannot figure out what is wrong, even with Valgrind or GDB.

    Read the article

  • How to download a file from a UNC mapped share via IIS and ASP

    - by helgeg
    I am writing an ASP application that will serve files to clients through the browser. The files are located on a file server that is reachable from the machine IIS is running on via a UNC path (\\server\some\path). I want to use something like the code below to serve the file. Serving files that are local to the IIS machine works well with this method; my trouble is serving files from the UNC mapped share:

        // Set the appropriate ContentType.
        Response.ContentType = "Application/pdf";
        // Get the physical path to the file.
        string FilePath = MapPath("acrobat.pdf");
        // Write the file directly to the HTTP content output stream.
        Response.WriteFile(FilePath);
        Response.End();

    My question is how I can specify a UNC path for the file name. Also, to access the file share I need to connect with a specific username/password. I would appreciate some pointers on how I can achieve this (either using the approach above or by other means).
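
    A sketch of one way this is commonly handled - impersonating a Windows account that has rights on the share before calling Response.WriteFile with the full UNC path. This is my own illustration, not from the question: the FileUser / FILEDOMAIN / FilePassword values are placeholders for whatever account can read the file server.

        // Assumes: using System; using System.Runtime.InteropServices;
        //          using System.Security.Principal; using System.Web;
        public static class UncFileServer
        {
            // advapi32 LogonUser gives us a token for the account that can read the share.
            [DllImport("advapi32.dll", SetLastError = true)]
            private static extern bool LogonUser(string user, string domain, string password,
                                                 int logonType, int logonProvider, out IntPtr token);

            private const int LOGON32_LOGON_NEW_CREDENTIALS = 9; // credentials used for outbound network access only
            private const int LOGON32_PROVIDER_WINNT50 = 3;      // required provider for NEW_CREDENTIALS

            public static void WriteUncFile(HttpResponse response, string uncPath)
            {
                IntPtr token;
                if (!LogonUser("FileUser", "FILEDOMAIN", "FilePassword",
                               LOGON32_LOGON_NEW_CREDENTIALS, LOGON32_PROVIDER_WINNT50, out token))
                    throw new InvalidOperationException("LogonUser failed: " + Marshal.GetLastWin32Error());

                using (WindowsIdentity identity = new WindowsIdentity(token))
                using (WindowsImpersonationContext ctx = identity.Impersonate())
                {
                    response.ContentType = "Application/pdf";
                    response.WriteFile(uncPath); // e.g. @"\\server\some\path\acrobat.pdf"
                }
                // For brevity the token handle is not closed here (kernel32 CloseHandle).
                response.End();
            }
        }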

    Read the article

  • sybase - values from one table that aren't on another, on opposite ends of a 3-table join

    - by Lazy Bob
    Hypothetical situation: I work for a custom sign-making company, and some of our clients have submitted more sign designs than they're currently using. I want to know which signs have never been used. 3 tables are involved.

    Table A - signs for a company:

        sign_pk (unique) | company_pk | sign_description
        1                | 1          | small
        2                | 1          | large
        3                | 2          | medium
        4                | 2          | jumbo
        5                | 3          | banner

    Table B - company locations:

        company_pk | company_location (unique)
        1          | 987
        1          | 876
        2          | 456
        2          | 123

    Table C - signs at locations (it's a bit of a stretch, but each row can have 2 signs, and it's a one-to-many relationship from company location to signs at locations):

        company_location | front_sign | back_sign
        987              | 1          | 2
        987              | 2          | 1
        876              | 2          | 1
        456              | 3          | 4
        123              | 4          | 3

    So, a.company_pk = b.company_pk and b.company_location = c.company_location. What I want to find is how to query and get back that sign_pk 5 isn't at any location. Querying each sign_pk against all of the front_sign and back_sign values is a little impractical, since all the tables have millions of rows. Table A is indexed on sign_pk and company_pk, table B on both fields, and table C only on company_location. The way I'm trying to write it is along the lines of "each sign belongs to a company, so find the signs that are not the front or back sign at any of the locations that belong to the company tied to that sign." My original plan was:

        SELECT a.sign_pk
        FROM a, b, c
        WHERE a.company_pk = b.company_pk
          AND b.company_location = c.company_location
          AND a.sign_pk *= c.front_sign
        GROUP BY a.sign_pk
        HAVING COUNT(c.front_sign) = 0

    just to do the front sign, and then repeat for the back, but that won't run because c is an inner member of an outer join and also in an inner join. This whole thing is fairly convoluted, but if anyone can make sense of it, I'll be your best friend.

    Read the article

  • Is it possible to have asynchronous processing in an Apache module?

    - by prashant2361
    Hi, I have a requirement where I need to send continuous updates to my clients. The client is a browser in this case. We have some data that updates every second, so once a client connects to our server, we maintain a persistent connection and keep pushing data to the client. I am looking for suggestions on how to implement this at the server end. Basically what I need is this:

    1. A client connects to the server. I maintain the socket and metadata about the socket. The metadata describes which updates need to be sent to this client.
    2. The server process then waits for new client connections.
    3. One other process holds the list of all the open sockets, goes through each of them, and sends the updates if required.

    Can we do something like this in an Apache module:

    1. An Apache process gets the new connection. It maintains the state for the connection. It keeps the state in some global memory and returns to the root process to signify that it is done, so that the root process can accept new connections.
    2. Although the Apache process has returned its status to the root process, it keeps executing in parallel, going through its global store and sending updates to clients, if any.

    So can an Apache process do these things:

    1. Have more than one connection associated with it?
    2. Asynchronously wait for new connections while at the same time processing the previous connections?

    Regards, Prashant

    Read the article

  • Harvesting Dynamic HTTP Content to produce Replicating HTTP Static Content

    - by Neil Pitman
    I have a slowly evolving dynamic website served from J2EE. The response time and load capacity of the server are inadequate for client needs. Moreover, ad hoc requests can unexpectedly affect other services running on the same application server/database. I know the reasons and can't address them in the short term. I understand HTTP caching hints (expiry, etags...) and for the purpose of this question, please assume that I have maxed out the opportunities to reduce load.

    I am thinking of doing a brute-force traversal of all URLs in the system to prime a cache, and then copying the cache contents to geo-dispersed cache servers near the clients. I'm thinking of Squid or Apache HTTPD mod_disk_cache. I want to prime one copy and (manually) replicate the cache contents. I don't need a federation or intelligence amongst the slaves. When the data changes, invalidating the cache, I will refresh my master cache and update the slave versions, probably once a night.

    Has anyone done this? Is it a good idea? Are there other technologies that I should investigate? I can program this, but I would prefer a solution assembled from open-source technologies through configuration. Thanks

    Read the article

  • connecting to secure database from website host

    - by jim
    Hello all, I've got a requirement to both read and write data via a .NET web service to a SQL Server database that's on a private network. This database is currently accessed via a VPN connection by remote client software (on standard desktop machines) to get the latest product prices and to upload product stock sales. I've been tasked with finding a way to centralise this access through a web service that the clients then access, rather than them using the VPN route to connect directly to the database.

    My question is related to my .NET service's relationship to the SQL Server database. What are the options for connecting to a private network VPN from a domain host, in order to achieve the functionality of allowing the web service to both read and write data to the database? For now, I'm not too concerned about the client connectivity and security (though I appreciate that this will have to be worked out too); I'm really just interested in discovering the options available to allow my .NET web service to connect to the private network in as painless and transparent a way as possible.

    Switching the database onto public hosting is not an option, so I have to work with the scenario as described above for now, unless there's a compelling rationale presented to do otherwise. Thanks all... jim

    Read the article

  • Update data table on ASMX Service side

    - by Paul
    Hi, I need advice. On the web service side I have this method:

        public DataSet GetDs(string id)
        {
            SqlConnection conn = null;
            SqlDataAdapter da = null;
            DataSet ds;
            try
            {
                string sql = "SELECT * FROM Tab1";
                string connStr = WebConfigurationManager.ConnectionStrings["Employees"].ConnectionString;
                conn = new SqlConnection(connStr);
                conn.Open();
                da = new SqlDataAdapter(sql, conn);
                ds = new DataSet();
                da.Fill(ds, "Tab1");
                return ds;
            }
            catch (Exception ex)
            {
                throw ex;
            }
            finally
            {
                if (conn != null) conn.Close();
                if (da != null) da.Dispose();
            }
        }

    It returns a DataSet to the client app. The client application binds the DataSet to a DataGridView. The client can insert, update and delete rows in the table. When the client finishes its work, I want to accept the changes in the data table on the web service side. I could send the whole DataSet back from the client and update the table on the web service side, but I want to send only the changed data. Any advice? Thank you.
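
    One common pattern for "send only the changed data" (a sketch under my own assumptions, not the poster's code - the UpdateDs web method name is hypothetical) is to ship DataSet.GetChanges() from the client and apply it on the service with a SqlDataAdapter plus SqlCommandBuilder:

        // Assumes: using System.Data; using System.Data.SqlClient;
        //          using System.Web.Services; using System.Web.Configuration;

        // Client side - send only the modified rows back to the service:
        //   DataSet changes = ds.GetChanges();              // null when nothing has changed
        //   if (changes != null) service.UpdateDs(changes); // UpdateDs is a hypothetical web method

        [WebMethod]
        public void UpdateDs(DataSet changes)
        {
            string connStr = WebConfigurationManager.ConnectionStrings["Employees"].ConnectionString;
            using (SqlConnection conn = new SqlConnection(connStr))
            using (SqlDataAdapter da = new SqlDataAdapter("SELECT * FROM Tab1", conn))
            using (SqlCommandBuilder builder = new SqlCommandBuilder(da))
            {
                // The CommandBuilder derives INSERT/UPDATE/DELETE commands from the SELECT,
                // which requires Tab1 to have a primary key.
                da.Update(changes, "Tab1");
            }
        }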

    Read the article

  • Android: CustomListAdapter

    - by primal
    Hi, I have implemented a custom list view which looks like the Twitter timeline:

        adapter = new MyClickableListAdapter(this, R.layout.timeline, mObjectList);
        setListAdapter(adapter);

    The constructor for MyClickableListAdapter is as follows:

        private class MyClickableListAdapter extends ClickableListAdapter {
            public MyClickableListAdapter(Context context, int viewId, List objects) {
                super(context, viewId, objects);
            }
        }

    ClickableListAdapter extends BaseAdapter and implements the necessary methods. The XML for the list view is as follows:

        <ListView android:id="@+id/android:list"
                  android:layout_width="fill_parent"
                  android:layout_height="wrap_content" />

    This is what it looks like. I have 3 questions:

    1) I tried registering a context menu for the list view by adding the following line after setting the list adapter:

        registerForContextMenu(getListView());

    But on long-click the menu doesn't get displayed. I cannot understand what I am doing wrong!

    2) Is it possible to display a TextView above the ListView? I tried it by adding the code for the TextView above the ListView, but then only the TextView gets displayed.

    3) I have seen in many Twitter clients that on clicking "post", a window pops up from the top covering only some portion of the screen while the rest of the timeline stays visible. How can this be done, possibly without starting a new activity?

    Any help would be much appreciated.

    Read the article

  • Is it possible to determine whether a password input string is plaintext or hashed?

    - by Godders
    I have a RESTful API containing a URI of /UserService/Register. /UserService/Register takes an XML request such as:

        <UserRegistrationRequest>
          <Password>password</Password>
          <Profile>
            <User>
              <UserName>username</UserName>
            </User>
          </Profile>
        </UserRegistrationRequest>

    I have the following questions given the above scenario:

    1) Is there a way (using C# and .NET 3.5+) of enforcing/validating that clients calling Register are passing a hashed password rather than plaintext?

    2) Is leaving the choice of hashing algorithm to the client a good idea? We could provide a second URI of /UserService/ComputePasswordHash which the client would call before calling /UserService/Register. This has the benefit of ensuring that each password is hashed using the same algorithm.

    3) Is there a mechanism within REST to ensure that a client has called one URI before calling another?

    Hope I've explained myself ok. Many thanks in advance for any help.
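
    As an illustration of the kind of server-side check the first question describes (my own sketch, not from the post): the service cannot prove a string was hashed, but it can at least reject values that don't match an agreed digest format, e.g. a 64-character hex SHA-256 string.

        // Assumes: using System.Text.RegularExpressions;
        // A SHA-256 digest rendered as hex is exactly 64 hex characters.
        private static readonly Regex Sha256Hex =
            new Regex("^[0-9a-fA-F]{64}$", RegexOptions.Compiled);

        public static bool LooksLikeSha256Hash(string password)
        {
            // This only checks the format; a client could still send a
            // 64-character hex string that is not a real hash.
            return password != null && Sha256Hex.IsMatch(password);
        }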

    Read the article

  • Using Everyauth/Express and Multiple Configurations?

    - by Zane Claes
    I'm successfully using Node.js + Express + Everyauth ( https://github.com/abelmartin/Express-And-Everyauth/blob/master/app.js ) to log in to Facebook, Twitter, etc. from my application. The problem I'm trying to wrap my head around is that Everyauth seems to be "configure and forget": I set up a single everyauth object, configure it to act as middleware for Express, and then forget about it. For example, if I want to create a mobile Facebook login I do:

        var app = express.createServer();
        everyauth.facebook
          .appId('AAAA')
          .appSecret('BBBB')
          .entryPath('/login/facebook')
          .callbackPath('/callback/facebook')
          .mobile(true); // mobile!
        app.use(everyauth.middleware());
        everyauth.helpExpress(app);
        app.listen(8000);

    Here's the problem: both mobile and non-mobile clients will connect to my server, and I don't know which is connecting until the connection is made. Even worse, I need to support multiple Facebook app IDs (and, again, I don't know which one I will want to use until the client connects and I partially parse the input). Because everyauth is a singleton which is configured once, I cannot see how to make these changes to the configuration based upon the request that is made.

    What it seems like is that I need to create some sort of middleware which acts before the everyauth middleware to configure the everyauth object, such that everyauth subsequently uses the correct appId/appSecret/mobile parameters. I have no clue how to go about this... Suggestions? Here's the best idea I have so far, though it seems terrible: create an everyauth object for every possible configuration, using a different entryPath for each...

    Read the article

  • Cross-domain data access in JavaScript

    - by vit
    We have an ASP.NET application hosted on our network and exposed to a specific client. This client wants to be able to import data from their own server into our application. The data is retrieved with an HTTP request and is CSV formatted. The problem is that they do not want to expose their server to our network and are requesting the import to be done on the client side (all clients are from the same network as their server). So, what needs to be done is:

    1. They request an import page from our server
    2. The client script on the page issues a request to their server to get CSV formatted data
    3. The data is sent back to our application

    This is not a challenge when both servers are on the same domain: a simple hidden iframe or something similar will do the trick, but here what I'm getting is a cross-domain "access denied" error. They also refuse to change the data format to return JSON or XML formatted data. What I have tried and learned so far:

    - Hidden iframe -- "access denied"
    - XMLHttpRequest -- behaviour depends on the browser security settings: may work, may work while nagging the user with security warnings, or may not work at all
    - Dynamic script tags -- would have worked if they could have returned data in JSON format
    - IE client data binding -- the same "access denied" error

    Is there anything else I can try before giving up and saying that it will not be possible without exposing their server to our application, changing their data format or changing their browser security settings? (A DNS trick is not an option, by the way.)

    Read the article

  • How to package an application into deliverable software and preserve database integrity and correctness

    - by user287745
    I have made an application project in VS 2008 (C#, SQL Server from VS 2008). The database has about 20 tables with many fields in each, and I have made an interface for adding, deleting, editing and retrieving data according to the predefined needs of the users. Now I have to:

    1) Turn the project into software which I can deliver to my professor. That is, he can just double-click the icon and the software simply starts; no VS 2008 needed to start the debugging.

    2) The database will be on one powerful computer (dual core, latest everything, Windows XP) and the users will access it from other computers connected over a LAN. I am able to change the connection string to the shared database using VS 2008 / the debugger whenever the server changes, but how am I supposed to do that once it's packaged software? (See the sketch below.)

    3) There will be many clients. Am I supposed to give the same software to everyone so they all can connect to the database? How will the integrity and correctness of the database be maintained? I mean the db.mdf file will be in a folder which is shared with read and write access, so it's not guaranteed that only one user will write at a time. Is there any coding for this, or...?

    Please help me out here. I am stuck and do not know what to do; I have no practical experience and would appreciate all the help. Thank you.
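
    For the connection-string part of question 2, one common approach (a sketch under my own assumptions - the "MainDb" key, server name and database name are placeholders) is to keep the connection string in the application's app.config, so it can be edited on each client machine without recompiling:

        // app.config on each client machine (values below are placeholders):
        //   <connectionStrings>
        //     <add name="MainDb"
        //          connectionString="Data Source=SERVER-PC\SQLEXPRESS;Initial Catalog=MyDb;Integrated Security=True"
        //          providerName="System.Data.SqlClient" />
        //   </connectionStrings>

        // Assumes: using System.Configuration; using System.Data.SqlClient;
        // (add a reference to System.Configuration.dll)
        string connStr = ConfigurationManager.ConnectionStrings["MainDb"].ConnectionString;
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();
            // ... run commands against the shared database
        }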

    Read the article

  • connecting to secure database on private network from website host

    - by jim
    Hello all, I've got a requirement to both read and write data via a .NET web service to a SQL Server database that's on a private network. This database is currently accessed via a VPN connection by remote client software (on standard desktop machines) to get the latest product prices and to upload product stock sales. I've been tasked with finding a way to centralise this access through a web service that the clients then access, rather than them using the VPN route to connect directly to the database.

    My question is related to my .NET service's relationship to the SQL Server database. What are the options for connecting to a private network VPN from a domain host, in order to achieve the functionality of allowing the web service to both read and write data to the database? For now, I'm not too concerned about the client connectivity and security (though I appreciate that this will have to be worked out too); I'm really just interested in discovering the options available to allow my .NET web service to connect to the private network in as painless and transparent a way as possible.

    [edit] The web service will also be available to the retail website in order for it to look up product info as well as allocate stock transfers to the same SQL Server db. It will therefore be located on the same domain as the retail site.

    The option of switching the database onto public hosting is not feasible, so I have to work with the scenario as described above for now, unless there's a compelling rationale presented to do otherwise. Thanks all... jim

    Read the article

  • PHP rewrite to included file - is this a valid script?

    - by Poni
    Hi all! I asked this question previously: http://stackoverflow.com/questions/2921469/php-mutual-exclusion-mutex

    As said there, I want several sources to send their stats once in a while, and these stats will be shown on the website's main page. My problem is that I want this to be done in an atomic manner, so no update of the stats will overlap another one running in the background. Now, I came up with this solution and I want you PHP experts to judge it.

    stats.php

        <?php
        define("my_counter", 12);
        ?>

    index.php

        <?php
        include "stats.php";
        echo constant("my_counter");
        ?>

    update.php

        <?php
        $old_error_reporting = error_reporting(0);
        include "stats.php";

        define("my_stats_template", ' <?php define("my_counter", %d); ?> ');

        $fd = fopen("stats.php", "w+");
        if ($fd) {
            if (flock($fd, LOCK_EX)) {
                $my_counter = 0;
                try {
                    $my_counter = constant("my_counter");
                } catch (Exception $e) {
                }
                $my_counter++;
                $new_stats = sprintf(constant("my_stats_template"), $my_counter);
                echo "Counter should stand at $my_counter";
                fwrite($fd, $new_stats);
            }
            flock($fd, LOCK_UN);
            fclose($fd);
        }

        error_reporting($old_error_reporting);
        ?>

    Several clients will call the "update.php" file once every 60 seconds each. "index.php" is going to use the "stats.php" file all the time, as you can see. What's your opinion?

    Read the article

  • Preventing a heavy process from sinking in the swap file

    - by eran
    Our service tends to fall asleep during the nights on our client's server, and then has a hard time waking up. What seems to happen is that the process heap, which is sometimes several hundred MB, is moved to the swap file. This happens at night, when our service is not used and others are scheduled to run (DB backups, AV scans etc). When this happens, after a few hours of inactivity the first call to the service takes up to a few minutes (subsequent calls take seconds).

    I'm quite certain it's an issue of virtual memory management, and I really hate the idea of forcing the OS to keep our service in physical memory. I know doing that will hurt other processes on the server and decrease the overall server throughput. Having said that, our clients just want our app to be responsive. They don't care if nightly jobs take longer.

    I vaguely remember there's a way to force Windows to keep pages in physical memory, but I really hate that idea. I'm leaning more towards some internal or external watchdog that will initiate higher-level functionality (there is already an internal scheduler that does very little, and it makes no difference). If there were a 3rd-party tool that provided that kind of service, it would be just as good. I'd love to hear any comments, recommendations and common solutions to this kind of problem. The service is written in VC2005 and runs on Windows servers.

    Read the article

  • Exporting de-aggregated data

    - by Ben
    I'm currently working on a data export feature for a survey application. We are using SQL Server 2008. We store data in a normalized format: QuestionId, RespondentId, Answer. We have a couple of other tables that define what the question text is for the QuestionId and demographics for the RespondentId...

    Currently I'm using some dynamic SQL to generate a pivot that joins the question table to the answer table and creates an export; it's working... The problem is that it seems slow, and we don't have that much data (fewer than 50k respondents). Right now I'm thinking "why am I 'paying' to de-aggregate the data for each query? Why don't I cache that?"

    The data being exported is based on dynamic criteria. It could be "give me respondents that completed on x date (or range)" or "people that like blue", etc. Because of that, I think I have to cache at the respondent level, find out which respondents are being exported, and then select their combined cached de-aggregated data.

    To me the quick and dirty fix is a totally flat table: RespondentId, Question1, Question2, etc. The problem is, we have multiple clients and that doesn't scale, AND I don't want to have to maintain the flattened table as the survey changes. So I'm thinking about putting an XML column on the respondent table and caching the results of SELECT * FROM Data WHERE RespondentId = x FOR XML AUTO. With that in place, I would then be able to produce my export with filtering and XML calls into the XML column.

    What are you doing to export aggregated data in a flattened format (CSV, Excel, etc.)? Does this approach seem ok? I worry about the cost of XML functions on larger result sets (think SELECT RespondentId, XmlCol.value('//data/question_1', 'nvarchar(50)') AS [Why is there air?], XmlCol.RinseAndRepeat)... Is there a better technology/approach for this? Thanks!

    Read the article

  • C++ Builder 2010 Exception Dead Lock ???

    - by James
    Hello. Is this some kind of exception deadlock I am facing? How can I avoid it? Have a look at the line below, where I keep the TIdContext objects of connected clients in an object list, and at times I need to process that list. If one user disconnects while another thread is processing the list, then for that freed TIdContext->Data object I get an access violation. OK, fine, I am using try/catch, but the problem is that at the line below there is some kind of deadlock and the process hangs. If I attach a debugger it shows the access violation again and again and again, and CPU consumption goes up because of that exception loop.

        AnsiString UserID = ((Tmyobject*) ((TIdContext*) ObjList->Objects[i])->Data)->UserID;

    I know I can check that the object is not NULL before accessing it; that works. But my question is: what if, once in a blue moon, the Data object is freed right after the NULL check passes, and on the next line, when I access the object again, I get the same hang? So how do I avoid/handle this? Here is the call stack:

        :005F07C0 System::AnsiStringBase::AnsiStringBase(this=:0285FCE0, src=????)
        :0040223F System::AnsiStringT<0>::AnsiStringT<0>(this=:0285FCE0, src=:00000008)
        :00457996 TSomeClass::SomeFunction(this=:009D8230, UserID={ }, DataSize={ }, )
        :0047BFF1 __linkproc__ ThreadProc(Thread=:009561C0)
        :004AD00E __linkproc__ ThreadWrapper(Parameter=:009EAA30)
        :7c80b729 ; C:\WINDOWS\system32\kernel32.dll

    Please help. Thanks

    Read the article

  • Client-server synchronization pattern / algorithm?

    - by tm_lv
    I have a feeling that there must be client-server synchronization patterns out there, but I totally failed to google one up. The situation is quite simple - the server is the central node that multiple clients connect to and manipulate the same data. Data can be split into atoms; in case of conflict, whatever is on the server has priority (to avoid getting the user into conflict solving). Partial synchronization is preferred due to the potentially large amounts of data.

    Are there any patterns / good practices for such a situation, or if you don't know of any - what would be your approach?

    Below is how I am currently thinking of solving it: in parallel to the data, a modification journal will be kept, with all transactions timestamped. When a client connects, it receives all changes since its last check, in consolidated form (the server goes through the lists and removes additions that are followed by deletions, merges updates for each atom, etc.). Et voila, we are up to date.

    An alternative would be keeping a modification date for each record and, instead of performing data deletes, just marking records as deleted. Any thoughts? (A sketch of this alternative follows below.)
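
    As an illustration of the second variant above (per-record modification date plus soft deletes, server wins on conflict) - a minimal sketch under my own assumptions; the Record and SyncStore types are hypothetical, not from the post:

        // Assumes: using System; using System.Collections.Generic; using System.Linq;

        // Hypothetical record shape: every row carries a last-modified timestamp
        // and a tombstone flag instead of being physically deleted.
        public class Record
        {
            public int Id;
            public string Payload;
            public DateTime ModifiedUtc;
            public bool IsDeleted;
        }

        public class SyncStore
        {
            private readonly List<Record> _records = new List<Record>();

            // Client asks: "give me everything that changed since my last check".
            // Deleted records are included so the client can remove them locally.
            public List<Record> GetChangesSince(DateTime lastCheckUtc)
            {
                return _records.Where(r => r.ModifiedUtc > lastCheckUtc).ToList();
            }

            // Server-wins conflict rule: a client write is ignored if the server
            // copy was modified after the client last saw it.
            public bool TryApplyClientChange(Record incoming, DateTime clientLastSeenUtc)
            {
                Record current = _records.FirstOrDefault(r => r.Id == incoming.Id);
                if (current != null && current.ModifiedUtc > clientLastSeenUtc)
                    return false; // conflict: the server version wins

                if (current == null)
                {
                    current = new Record { Id = incoming.Id };
                    _records.Add(current);
                }
                current.Payload = incoming.Payload;
                current.IsDeleted = incoming.IsDeleted;
                current.ModifiedUtc = DateTime.UtcNow;
                return true;
            }
        }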

    Read the article

  • In which domains are message oriented middleware like AMQP useful?

    - by cocotwo
    What problems does MOM (Message Oriented Middleware) solve? Scalability? Integration? In which domains is it typically used, and in which domains is it typically not used? For example, say, is Google using such a solution for its main search engine or to power GMail? What about big websites like Walmart, eBay, FedEx (pretty much a Java shop) and buy.com (pretty much an MS shop)? Does MOM solve a need there?

    Does it make any sense when you're writing a webapp where you control the server side and have a homogeneous environment (say tens of Amazon EC2 instances all running Linux + Java JVMs) and where the clients are, well, web browsers? Does it make sense for desktop apps that need to communicate with a server? Or is it 'only' for big enterprise stuff where you typically have a happy mix of countless different systems that need to communicate in one way or another?

    I'm a bit confused as to what it's useful for, and I think that with examples of where it's appropriate and where it's not, I could better understand its use.

    Read the article

  • ASP.NET - What is the best way to block the application usage?

    - by Tufo
    Our clients must pay a monthly fee... if they don't, what is the best way to block usage of the ASP.NET software? Note: the application runs on the client's own server; it's not a SaaS app... My ideas are:

    Idea 1: Host a web service on the internet that the application will use to know whether the client may use the software.

    Issue 1 - What happens if the client's internet fails? Or the data center fails?
    Possible answer: Have each web service call return a key that is valid for 7 or 15 days, so each successful call lets the software run for another 7 or 15 days. This way the application is only locked after 7 or 15 days without reaching our web service.

    Issue 2 - And what if the client doesn't have, or doesn't want to enable, internet access for the application?

    Idea 2: Send a key to the client monthly.

    Issue - How to make an offline key?
    Possible answer: Generate a hash using the "limit" date, so each login attempt in the software compares today's hash with the key? (A sketch of this idea follows below.)

    Issue 2 - Where to store the key?
    Possible answer: Database (not good, too easy to change), text file, registry, code file, assembly...

    Any opinion will be very appreciated!
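
    For the "hash of the limit date" idea, a minimal sketch of what the check could look like (my own illustration, not from the post; the hard-coded secret is only a placeholder - in practice anything shipped inside the assembly can be extracted, so a hidden secret or an asymmetric signature would be needed):

        // Assumes: using System; using System.Text; using System.Security.Cryptography;
        private static readonly byte[] Secret = Encoding.UTF8.GetBytes("replace-with-a-real-secret");

        // Issued by the vendor: the expiry date plus an HMAC over it.
        public static string CreateKey(DateTime expiresUtc)
        {
            string datePart = expiresUtc.ToString("yyyy-MM-dd");
            using (var hmac = new HMACSHA256(Secret))
            {
                string sig = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(datePart)));
                return datePart + ":" + sig;
            }
        }

        // Checked on every login: the signature must match and the date must not have passed.
        public static bool IsKeyValid(string key)
        {
            string[] parts = key.Split(':');
            if (parts.Length != 2) return false;
            using (var hmac = new HMACSHA256(Secret))
            {
                string expected = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(parts[0])));
                return expected == parts[1] &&
                       DateTime.ParseExact(parts[0], "yyyy-MM-dd", null) >= DateTime.UtcNow.Date;
            }
        }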

    Read the article

  • How many WCF connections can a single host handle?

    - by mafutrct
    I'll try to explain this with an example. I'm writing a chat application. There are users that can join chat rooms. A user has to log in before he can join any room. Currently, there is a single service. A user logs in using this service, then sends and receives messages for all joined rooms via this single service:

        channel.Login("Hans Moleman", "password");
        channel.JoinRoom("name of room");
        channel.SendChat("name of room", "hello");

    I'm thinking about changing the design so there is a new WCF connection for each joined room. In the actual app, the number of connections per client is likely going to be in the range of 10-100, possibly more. Is this a good idea? Or are ~100 connections per client too much? The server should be able to handle many clients (range 100-1000, later up to 10k). In case it matters, I'm using NetTcpBinding.
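
    For reference, a minimal sketch of the kind of single-channel contract the current design implies (my own reconstruction - the operation names mirror the snippet above, while the duplex callback part is an assumption about how messages come back to the client):

        // Assumes: using System.ServiceModel;
        [ServiceContract(CallbackContract = typeof(IChatCallback),
                         SessionMode = SessionMode.Required)]
        public interface IChatService
        {
            [OperationContract]
            void Login(string userName, string password);

            [OperationContract]
            void JoinRoom(string roomName);

            [OperationContract(IsOneWay = true)]
            void SendChat(string roomName, string message);
        }

        // Duplex callback: the server pushes messages for every joined room back
        // over the same NetTcpBinding session, so a single connection per client
        // can carry all rooms.
        public interface IChatCallback
        {
            [OperationContract(IsOneWay = true)]
            void ReceiveChat(string roomName, string sender, string message);
        }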

    Read the article

  • Design pattern to integrate Rails with a Comet server

    - by empire29
    I have a Ruby on Rails (2.3.5) application and an APE (Ajax Push Engine) server. When records are created within the Rails application, I need to push the new record out on the applicable channels to the APE server. Records can be created in the Rails app by the traditional path through the controller's create action, or they can be created by several event machines that are constantly monitoring various input streams and creating records when they see data that meets certain criteria.

    It seems to me that the best/right place to put the code that pushes the data out to the APE server (which in turn pushes it out to the clients) is in the model's after_create hook, since not all record creations will flow through the controller's create action.

    The final caveat is that I want to push a piece of formatted HTML out to the APE server rather than a JSON representation of the data. The reasons I want to do this are: 1) I already have logic to produce the desired layout in existing partials; 2) I don't want to create a JavaScript implementation of the partials (JavaScript that takes a JSON object and creates all the HTML around it for presentation) - that would quickly become a maintenance nightmare. The problem is that this would require "rendering" partials from within the model, which I'm having trouble doing anyhow because models don't seem to have access to helpers when partials are rendered in this manner.

    Anyhow - just wondering what the right way to organize all of this is. Thanks

    Read the article

  • HTML not appearing in Emails

    - by John
    My website sends out HTML emails, but most of my recipients are receiving them as HTML source instead of the nice table layouts. The problem doesn't appear to be an email client issue, since the emails display properly in webmail clients like Gmail, Yahoo, Hotmail etc... They also display properly when viewed through Outlook or Thunderbird connected to Gmail, Yahoo, Hotmail etc...

    However, I have one domain name that I registered with a shared hosting provider called 1and1.com. I tried viewing my emails through their webmail client, Thunderbird and Outlook, but in all three cases only the HTML markup showed up. Also, I assume most of my recipients use MS Outlook with MS Exchange Server, because they are business/finance people. Unfortunately, I don't know how to get an email account that's managed by an MS Exchange Server to test with.

    I made sure I'm sending my emails with the following headers:

        MIME-Version: 1.0
        Content-type: text/html; charset=iso-8859-1

    Does anyone know what might be wrong? Can anyone recommend a solution?
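
    If the site happens to send its mail from .NET (the post doesn't say what the sending stack is, so this is purely an assumption on my part), one way to avoid hand-writing MIME headers is to let the framework build the multipart structure; the addresses and host below are placeholders:

        // Assumes: using System.Net.Mail; using System.Net.Mime; using System.Text;
        var msg = new MailMessage("noreply@example.com", "recipient@example.com");
        msg.Subject = "Monthly report";
        msg.Body = "Plain-text fallback for clients that cannot render HTML.";

        // Add the HTML version as an alternate view; the framework emits the
        // multipart/alternative structure and Content-Type headers itself.
        string html = "<table><tr><td>Hello</td></tr></table>";
        msg.AlternateViews.Add(
            AlternateView.CreateAlternateViewFromString(html,
                Encoding.GetEncoding("iso-8859-1"), MediaTypeNames.Text.Html));

        new SmtpClient("smtp.example.com").Send(msg);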

    Read the article

  • Should I invest in GraniteDS for Flex + Java development?

    - by Boden
    I'm new to Flex development, and RIAs in general. I've got a CRUD-style Java + Spring + Hibernate service on top of which I'm writing a Flex UI. Currently I'm using BlazeDS. This is an internal application running on a local network.

    It's become apparent to me that the way RIAs work is more similar to a desktop application than a web application, in that we load up the entire model and work with it directly on the client (or at least the portion that we're interested in). This doesn't really jibe well with BlazeDS, because it only supports remoting and not data management, so it can become a lot of extra work to make sure that clients are in sync and to avoid reloading the model, which can be large (especially since lazy loading is not possible).

    So it feels like what I'm left with is a situation where I have to treat my Flex application more like a regular old web application, doing a lot of fine-grained loading of data. LiveCycle is too expensive. The free version of WebORB for Java really only does remoting. Enter GraniteDS. As far as I can determine, it's the only free solution out there that has many of the data management features of LiveCycle. I've started to go through its documentation a bit and suddenly feel like it's yet another quagmire of framework that I'll have to learn just to get an application running.

    So my questions to the StackOverflow audience are: 1) Do you recommend GraniteDS, especially if my current Java stack is Spring + Hibernate? 2) At what point do you feel it starts to pay off? That is, at what level of application complexity does using GraniteDS really start to make development that much better, and in what ways?

    Read the article

  • How to "push" updates to individual cells in a (ASP.NET) web page table/grid?

    - by dommer
    I'm building something similar to a price comparison site. This will be developed in ASP.NET / WebForms / C# / .NET 3.5. The site will be used by the public, so I have no control over the client side - and the application isn't so central to their lives that they'll go out of their way to make it work.

    I want to have a table/grid that displays products in rows, vendors in columns, and prices in the cells. Price updates will be arriving (at the server) continuously, and I'd like to "push" any updates to the clients' browsers - ideally only updating what has changed. So, if Vendor A changes their price on Product B, I'd want to immediately update the relevant cell in all the browsers that are viewing this information. I don't want to use any browser plug-ins (e.g. Silverlight); JavaScript is fine.

    What's the best approach to take? Presumably my options are:

    1) have the client page continuously poll the server for updates, locate the correct cell and update it; or
    2) have the server be able to send updates to all the open browser pages which are listening for these updates.

    The first one would seem the most plausible, but I don't want to constrain the assembled wisdom of the SO community. I'm happy to purchase any third-party components (e.g. a grid) that might help with this. I already have the DevExpress grid/ajax components if they provide anything useful.
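
    A minimal sketch of option 1 above (my own illustration, not from the post): a small HTTP handler that returns only the prices changed since the timestamp the page sends, which the page's JavaScript would then apply to the matching cells. The PriceStore type and its ChangedSince query are hypothetical.

        // Assumes: using System; using System.Text; using System.Web;
        // PriceStore is a hypothetical in-memory cache of (product, vendor, price, changedUtc) rows.
        public class PriceUpdatesHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                // The page sends the timestamp of the last update it has already applied.
                DateTime since = DateTime.Parse(context.Request["since"], null,
                                                System.Globalization.DateTimeStyles.RoundtripKind);

                var changed = PriceStore.ChangedSince(since); // hypothetical query

                // Return one "productId|vendorId|price" line per changed cell;
                // the client script locates each cell by product/vendor and rewrites it.
                var sb = new StringBuilder();
                foreach (var row in changed)
                    sb.AppendFormat("{0}|{1}|{2}\n", row.ProductId, row.VendorId, row.Price);

                context.Response.ContentType = "text/plain";
                context.Response.Write(sb.ToString());
            }
        }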

    Read the article
