Search Results

Search found 22569 results on 903 pages for 'win32 process'.


  • How do I enable a disabled Event Notification?

    - by Derick Mayberry
    I have a scenario where I am using external notification to process documents being sent in from the entire navy fleet. Normally I have no problems, but just a few days ago an administrator changed passwords, my queue processing failed, and I rolled back the transaction with this C# code:

        catch (Exception)
        {
            TransporterService.WriteEventToWindowsLog(AppName, "Rolling Back Transaction:", ERROR);
            broker.Tran.Rollback();
            break;
        }

    After that, my target queue would continue to fill up, but nothing reached the external activation queue. Does the Event Notification get disabled once a transaction is rolled back? Should I have done a broker.EndDialog here when catching my exception? Also, after my event notification is disabled (if that is actually what's happening), how do I re-engage it? Do I have to drop it and recreate it? Thanks in advance for any help; I love Service Broker and it's working wonderfully except for this bug that I hope to fix soon.
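
    For orientation, here is a minimal, self-contained sketch of the receive-and-rollback pattern described above, written against plain ADO.NET rather than the poster's broker wrapper; the connection string and queue name are made-up placeholders, and the catch block is where the EndDialog question above would come into play:

        using System;
        using System.Data.SqlClient;

        class ReceiveLoopSketch
        {
            // Hypothetical connection string and queue name -- not from the original post.
            const string ConnStr = "Server=.;Database=DocStore;Integrated Security=true";

            static void ProcessOneMessage()
            {
                using (var conn = new SqlConnection(ConnStr))
                {
                    conn.Open();
                    SqlTransaction tran = conn.BeginTransaction();
                    try
                    {
                        var receive = new SqlCommand(
                            "WAITFOR (RECEIVE TOP (1) conversation_handle, message_type_name, message_body " +
                            "FROM TargetQueue), TIMEOUT 5000;", conn, tran);
                        using (SqlDataReader reader = receive.ExecuteReader())
                        {
                            while (reader.Read())
                            {
                                // ... process the document payload here ...
                            }
                        }
                        tran.Commit();
                    }
                    catch (Exception)
                    {
                        // Roll back so the message returns to the queue. Note that SQL Server's
                        // poison-message handling disables a queue after repeated consecutive
                        // rollbacks, which would also stop external activation for it.
                        // This is also where an END CONVERSATION (the broker.EndDialog question
                        // above) would be issued, in its own transaction, if the dialog should
                        // not be reused.
                        tran.Rollback();
                    }
                }
            }

            static void Main()
            {
                ProcessOneMessage();
            }
        }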

    Read the article

  • Can I make ssh tell me which control file it would use for multiplexing?

    - by Ryan Thompson
    I am using the following options in my ~/.ssh/config in order to enable connection multiplexing:

        ControlMaster auto
        ControlPath ~/.ssh/control/master-%r@%h:%p

    However, this has the annoying problem that the first shell to connect to a particular server must be the last to disconnect, because it is the master connection that all the other connections are using. So if you log out of the master, it appears to just hang. To solve this, I would like to wrap ssh with a script that checks if the control master file exists, and if not, starts a master ssh process in the background. Then it would start a slave ssh session. In order to accomplish this, my script would have to determine the path to the control file that ssh would use. This would entail parsing the ssh command line options and config files and implementing the logic for determining the ControlPath. Is there any way to just ask ssh what path it would use, so I can check it?

    Read the article

  • Configure postfix to filter email into hold queue

    - by Ian
    Hey, I would like postfix to send all emails received on SMTP off to an external process, which will decide whether to allow them through as normal, or whether to put them into the hold queue (or another quarantine area), where they have to wait for admin approval. I was thinking of doing this with an after-queue content filter, which uses pipe(8) to run a script on each message, and the script itself will spawn "postsuper -h " if it decides to put the message on hold. Then the admin can do postsuper -d or -r to delete or pass the message on as appropriate. So, my questions are - a) will this work, and b) is this the best way to do it? Would a milter or another type of content filter be a better approach?

    Read the article

  • Can we increase Torrent share ratio using Local Peer Discovery?

    - by Jagira
    I just want to know whether this is a flaw in the BitTorrent system or not. Let us assume that I am a member of a private torrent site which requires me to maintain a specific upload-to-download ratio. Will this work:

    - I create a torrent of a large file, say [ Fedora Linux ~ 4 GB ], and upload it to the tracker.
    - I download the same torrent using my ID and start it on another machine on the LAN, or in a virtual machine.
    - Both clients have Local Peer Discovery enabled, so they will find each other [ not via DHT ] and start transferring data using LAN bandwidth at LAN speeds.
    - Though both uploads and downloads will increase, my ratio will also increase.
    - If I reiterate the entire process 'n' times, the numerator in the "RATIO", i.e. upload, will become so large that the effect of downloads on the ratio will become negligible.

    I want to know whether this is legitimate?

    Read the article

  • Installed over 4G RAM on 32-bit OS? [closed]

    - by kai
    Possible Duplicate: 32-bit Windows Server address > 4GB RAM - How? I know that for a 32-bit OS, the addressable memory space for each process is "4G" (maybe just 3G in user space...). If I have 8G of RAM, is it correct that all of the processes can still utilize (share) these 8G of memory, but each of them is limited to a maximum of 4G? Or can the whole system only see and utilize 4G out of the 8G, so that having 8G of RAM on a 32-bit OS is the same as having 4G of RAM on it?

    Read the article

  • Creating a VM using Hyper-V causes the host to BSOD

    - by Arcass
    Hi, Problem description: When I try to create a virtual machine, the host BSODs partway through the process. From the logs it looks to fail/hang on the "Creating new VirtualDisckDriver with new VHD" step. The BSOD error code is SYSTEM_SERVICE_EXCEPTION : STOP: 0x0000003B. When the machine has finished restarting, it looks to have created the VHD and XML files for the VM, but it isn't accessible. I have two servers both behaving in exactly the same way, so I don't believe it's a hardware fault. Has anyone had a similar experience? How did you resolve the problem?

    NOTES
    Hardware: HP DL380 G6
    BIOS: 2010.03.30 (14 Apr 2010) [Latest from HP website]
    Intel Hyperthreading: Disabled
    Intel Virtualization Technology: Enabled
    No-Execute Memory Protection: Enabled
    Mem check reports no errors
    OS: Windows 2008 SP2 x64, fully updated

    Regards, Arcass

    Read the article

  • Unable to access USB device

    - by Tom
    Hi everyone, I'm reading my boot logs at /var/log, trying to understand why the boot process is taking so long. I found that the system can't access many USB devices, but I can't understand why. Is there a way to stop Ubuntu from trying to access them? Here are the lines:

        /var/log# grep -r "usb_id" .
        ./boot.log:usb_id[716]: unable to access '/devices/pci0000:00/0000:00:1a.0/usb3/3-1/3-1.2/3-1.2:1.0/input/input7/mouse1'
        ./boot.log:usb_id[721]: unable to access '/devices/pci0000:00/0000:00:1a.0/usb3/3-1/3-1.2/3-1.2:1.0/input/input7/event7'
        ./boot.log:usb_id[725]: unable to access '/devices/pci0000:00/0000:00:1a.0/usb3/3-1/3-1.2/3-1.2:1.0/input/input7/event7'
        ./syslog:Jan 12 21:12:05 TomsterInc usb_id[955]: unable to access '/devices/pci0000:00/0000:00:1a.0/usb3/3-1/3-1.2/3-1.2:1.0/input/input16/event16'
        ./syslog:Jan 12 21:12:05 TomsterInc usb_id[956]: unable to access '/devices/pci0000:00/0000:00:1a.0/usb3/3-1/3-1.2/3-1.2:1.0/input/input16/mouse3'
        ./syslog:Jan 12 21:12:05 TomsterInc usb_id[963]: unable to access '/devices/pci0000:00/0000:00:1a.0/usb3/3-1/3-1.2/3-1.2:1.0/input/input16/event16'
        ./daemon.log:Jan 12 21:12:05 TomsterInc usb_id[955]: unable to access '/devices/pci0000:00/0000:00:1a.0/usb3/3-1/3-1.2/3-1.2:1.0/input/input16/event16'
        ./daemon.log:Jan 12 21:12:05 TomsterInc usb_id[956]: unable to access '/devices/pci0000:00/0000:00:1a.0/usb3/3-1/3-1.2/3-1.2:1.0/input/input16/mouse3'
        ./daemon.log:Jan 12 21:12:05 TomsterInc usb_id[963]: unable to access '/devices/pci0000:00/0000:00:1a.0/usb3/3-1/3-1.2/3-1.2:1.0/input/input16/event16'

    Any help will be greatly appreciated. Thanks in advance.

    Read the article

  • Can't double-click files to open them in InDesign (CS5)

    - by Matt
    I cannot open a file unless I open InDesign (the program) and then do File > Open. If I double-click, it starts to open, then just hangs forever. AFTER I close it and look in the directory where the files are saved, I see a (temporary?) "lock" file. Now I can double-click the original file and it opens just fine. However, when I close InDesign it deletes the lock file and the whole process starts again... I have tried updating the software, uninstalled COMPLETELY and reinstalled, and tried a brand new Win7 install. These files are all saved on a network drive; the computer is a new quad-core Dell with 12GB of RAM and a fresh x64 Win7 install on the SSD. This does not happen with other programs.

    Read the article

  • Pre-game loading time vs. in-game loading time

    - by Keeper
    I'm developing a game which includes a randomly generated maze. There are some AI creatures lurking in the maze, and I want them to move along paths according to the maze's shape. There are two ways I could implement that: the first (which I used) is to calculate several of the wanted lurking paths once the maze is created. The second is to calculate a path only when it is needed, i.e. when a creature starts lurking along it. My main concern is loading times. If I calculate many paths at the creation of the maze, the pre-loading time is a bit long, so I thought about calculating them when needed. At the moment the game is not 'heavy', so calculating paths mid-game is not noticeable, but I'm afraid it will be once the game gets more complicated. Any suggestions, comments, or opinions will be of help. Edit: As of now, let p be the number of pre-calculated paths; a creature has a probability of 1/p of taking a new path (which means a path calculation) instead of an existing one. A creature does not start its patrol until the path is fully calculated, of course, so there is no need to worry about it getting killed in the process.
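
    For the second option, one common shape for the code is to compute a path only the first time it is requested and cache it, so repeat requests cost nothing. A minimal C# sketch of that idea; the cell indices, the key format, and the injected findPath search (A*, BFS, whatever the maze already uses) are all placeholders, not code from the question:

        using System;
        using System.Collections.Generic;

        // Hypothetical on-demand path cache: a path is only calculated the first
        // time some creature needs it, then reused for later requests.
        class PathCache
        {
            private readonly Dictionary<string, List<int>> cache = new Dictionary<string, List<int>>();
            private readonly Func<int, int, List<int>> findPath;

            // findPath is whatever search the maze already uses (A*, BFS, ...).
            public PathCache(Func<int, int, List<int>> findPath)
            {
                this.findPath = findPath;
            }

            public List<int> GetPath(int startCell, int goalCell)
            {
                string key = startCell + "->" + goalCell;
                List<int> path;
                if (!cache.TryGetValue(key, out path))
                {
                    path = findPath(startCell, goalCell);  // pay the cost once, mid-game
                    cache[key] = path;
                }
                return path;
            }
        }

    With this in place there is no pre-load step at all; the worst case is one path calculation the first time a particular start/goal pair comes up, and the 1/p rule from the edit simply controls how often that happens.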

    Read the article

  • Meet up with the JCP at JavaOne Latin America

    - by Heather VanCura
    The JCP made it to JavaOne Brazil! We had a quickie presentation earlier today on JCP.Next that was well attended. Come see us at the OTN mini-theatre tomorrow from 12:00-12:15 pm for a quickie on participation. Then make your way to the Mazanino Sala 12 at 12:30 pm for CON-22250. "The Java Community Process: How You Can Make a Positive Difference" will be presented by Heather VanCura, JCP, and Fabio Velloso, SouJava, on Thursday, 6 December, at 12:30 pm. Find out more about how to participate in the JCP program, the JCP.Next effort, and how to get involved with Adopt-a-JSR through your JUG (or on your own)! Here is the session description, translated from the Portuguese: The JCP plays a fundamental role in the evolution of Java. The session will emphasize the value of transparency and participation through the JCP, Java User Groups, and the Adopt-a-JSR program. We will also explore some of the upcoming changes to the process through the JCP.Next initiative, and explain how you can get involved. Bring your questions, your suggestions, and your concerns. We want to hear from you, and we want to encourage and facilitate your active participation in advancing the Java platform.

    Read the article

  • SQL 2008 publisher -> SQL 2000 subscriber: Is a pull subscription possible for merge replication?

    - by Brian Dunzweiler
    I am trying to synchronize a SQL 2000 SP4 subscriber to a SQL 2008 publisher via a merge pull subscription. When the subscriber tries to run the merge agent, it fails with the following error: "The process could not connect to Distributor 'OH05DBS002\SAM_SSG_2008'. SQL Server does not exist or access denied." Has anyone had success with this setup? I was able to create and synchronize a push subscription, so I know that communication works between the two, at least from 2008 to 2000. The lack of communication from 2000 to 2008 also affects the ability to create a linked server on the SQL 2000 subscriber. One other tidbit - I did install the SQL 2008 native client on the 2000 box, but it didn't help either. Before anyone asks, I can't upgrade the subscriber as it still needs to support replication with MS Access 2003. Yeah, I know. :) TIA, Brian

    Read the article

  • Bad archive mirror using PXE boot method

    - by user11566
    I'm trying to automatically install Ubuntu on a client PC using the PXE boot method. My objectives are below:

    - I am following the steps given in this link: installation using PXE BOOT.
    - The server will have a KICKSTART config file which contains the parameters for the OS installation, plus the files which are required for the OS installation.
    - The client has to detect this configuration along with the setup files and complete the installation without any input from the user.

    On my server I have installed dhcp3-server, apache2 and tftp to help me with the installation. I have nearly achieved my first objective: I am able to boot my client using the files stored on the server, but during the installation stage it asks me to CHOOSE A MIRROR OF THE UBUNTU ARCHIVE. I gave the server's IP address and the path on the server where the files are located, but it gives me the error BAD ARCHIVE MIRROR. So, instead of downloading all the files from the internet and storing them on my disk, is it possible to use the files which come with the Ubuntu CD, and how should I store these files on the disk, and in what format (should I zip them)? Secondly, I am also generating the ks.cfg which I want to give to the client for automatic installation of the OS. So how should the configuration file be given to the installation process?

    Read the article

  • Can I run AD commands from a standard PowerShell script?

    - by Ben
    I am putting together a script to run post-sysprep. It should check if the machine is on the network, and if it is, it should query AD to see if a computer account exists with its service tag (we're using these as the hostnames of the machines). If one does exist, it should delete the account and rejoin the machine to the domain. I have got the majority of the script running, but need to run the following:

        Remove-ADComputer -Identity $distinguishedName

    How can I run this from the "standard" PowerShell environment? I don't want to use the AD module. (By the way - I'm on a mixed-mode 2000/03 domain as we are in the process of upgrading to 2008.) I'm new to PowerShell, so be gentle if I'm completely missing the point! Thanks, Ben
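
    Since the AD module is ruled out, the usual fallback is the underlying ADSI / System.DirectoryServices API, which plain PowerShell can reach as ordinary .NET types. A rough C# sketch of the equivalent lookup-and-delete, just to show which calls are involved; the LDAP path and service tag are made-up examples:

        using System;
        using System.DirectoryServices;

        class DeleteComputerAccountSketch
        {
            static void Main()
            {
                // Hypothetical domain root and service tag -- substitute the real values.
                string serviceTag = "ABC1234";
                using (var root = new DirectoryEntry("LDAP://DC=example,DC=local"))
                using (var searcher = new DirectorySearcher(root))
                {
                    // Find the computer account whose name matches the service tag.
                    searcher.Filter = "(&(objectClass=computer)(name=" + serviceTag + "))";
                    SearchResult result = searcher.FindOne();
                    if (result != null)
                    {
                        // Delete the stale account so the machine can be rejoined cleanly.
                        using (DirectoryEntry computer = result.GetDirectoryEntry())
                        {
                            computer.DeleteTree();
                        }
                    }
                }
            }
        }

    The same calls can be made directly from PowerShell via the [ADSI] and [ADSISearcher] type accelerators, without importing any module.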

    Read the article

  • Fun Upgrading to .Net 4.0

    - by Sam Abraham
    We are currently in the process of upgrading one of our applications to .Net 4.0. Aside from us geeks always wanting to use the latest and greatest technologies, an immediate business need for Silverlight 4.0 features justified our upgrade endeavor. The following is a summary of some issues we ran into with our web project:

    For security purposes, the IIS 7 .Net 4.0 ISAPI filter is disabled. "Allow" it from the ISAPI and CGI Restrictions screen as shown:

    Figure 1 - Allowing ASP.Net 4.0 ISAPI Filter

    By default the Web Setup Project only requires the .Net Framework 4 Client Profile to be installed on the target system, which offers a lighter-weight install for client machines consuming .Net 4.0 applications. However, using certain .Net 4.0 features requires the full .Net 4.0 Framework, as outlined in this link: http://msdn.microsoft.com/en-us/library/cc656912.aspx. We hence needed to update the installer to require the complete .Net 4.0 Framework on the target machine and to prompt for its installation if needed.

    To accomplish this goal, we updated the installer's launch conditions to check for .Net 4.0, as well as the installer prerequisites, as shown:

    Figure 2 - Ensure Web Setup Project runs on full .Net 4.0 version
    Figure 3 - Launch Conditions screen
    Figure 4 - Set launch condition to .Net 4.0
    Figure 5 - Changing installer prerequisites
    Figure 6 - Changing installer prerequisites

    Read the article

  • Can't complete dropbox installation from behind proxy in Ubuntu 11.10

    - by Mark Jones
    Problem: My PC on campus sits behind a proxy (requiring authentication) and I can't set up Dropbox. I am convinced that this is a proxy issue, as I can't set up Ubuntu One either (but I don't use Ubuntu One, so that is not a problem). I have looked at the Ubuntu One fix, but it seems to modify settings explicitly related to Ubuntu One. I can install the nautilus-dropbox package (compiled from source, from the .deb package from the website, and from the Software Centre), but once I click OK on the "Dropbox Installation" dialog box (prompting me to download the proprietary daemon) the installation just freezes with the OK button pressed. When I look at its process in System Monitor, its waiting channel is inet_wait_for_connect. I have set the following proxy directives thus far:

    - Added mj22:**@proxy.waikato.ac.nz:80 information to the network proxy settings under Network in Settings.
    - Added http_host and http_port variables under gconf-editor-system-proxy.
    - Added 'host', 'authentication_password', 'authentication_user' and ticked 'user authentication' and 'use_http_proxy' under gconf-editor-system-http_proxy.
    - Added export http_proxy="http://mj22:**@proxy.waikato.ac.nz:80/" to /etc/bash.bashrc
    - Added Acquire::http::proxy "http://mj22:**@proxy.waikato.ac.nz:80/"; to /etc/apt/apt.conf (which is what I imagine is letting Software Centre retrieve packages).

    (where ** is my password) I have also added the equivalent ftp and https lines for the above entries. I get the internet fine and Software Centre can download packages, but that's it.

    Related issues: The Software Centre can't fetch reviews (but can download packages). When trying to add an online account in GNOME 3, a dialog pops up with "Error getting a Request Token: Cannot connect to proxy (proxy.waikato.ac.nz)".

    Updates: After some time (10 mins or so) Dropbox shows an error dialog box that reads: "Trouble connecting to Dropbox servers. Maybe your internet connection is down, or you need to set your http_proxy environment variable." Is there a way I can see what environment variables are currently set?

    Read the article

  • Problem installing WordPress

    - by Hajloo
    I am trying to install WordPress on a Windows client with WebPI, which is provided by Microsoft. I had to stop the installation process 3 times and install the PHP and MySQL extensions manually, but every time I continued setup with WebPI it finally showed me a success message. But when I try to view the installed WordPress on my client, I see this: "Your PHP installation appears to be missing the MySQL extension which is required by WordPress." I asked about it on Stack Overflow here but I couldn't get the right answer. I installed everything in C:\Program Files\, so these are the locations:

        C:\Program Files\MySQL\MySQL Server 5.1
        C:\Program Files\Php
        C:\Program Files\ext

    mysql root password: admin
    wordpress database: wordpress
    wordpress database password: 123

    Here is my php.ini

    Read the article

  • Volume size doesn't match Disk size after gparted expansion

    - by Cybersylum
    I just expanded a basic disk on a Windows XP VM from 15 GB to 40 GB using the GParted LiveCD (0.5.2-11). I didn't notice anything unusual during the expansion, but after I rebooted back into Windows, the disk capacity doesn't match the disk size as it should (there is only 1 volume on the disk). The disk shows as 40 GB, but the C: volume still shows the original size. I've tried expanding the disk again with GParted (no change), and using VMware Converter and having it adjust the size of the volume during the process (it complains about a lack of space / a snapshot error inside the OS). The volume has 27% free space, so I don't think it is a space issue. Chkdsk doesn't seem to find anything wrong either. The OS seems to run just fine; it just doesn't see the additional space. Any ideas?

    Read the article

  • information about /proc/pid/sched

    - by redeye
    Not sure this is the right place for this question, but here goes: I'm trying to make some sense of the /proc/pid/sched and /proc/pid/task/tid/sched files for a highly threaded server process, but I was not able to find a good explanation of how to interpret these files (just a few bits here: http://knol.google.com/k/linux-performance-tuning-and-measurement#). I assume this entry in procfs is related to newer versions of the kernel that run with the CFS scheduler? This is a CentOS distro running a 2.6.24.7-149.el5rt kernel with the preempt-rt patch. Any thoughts?

    Read the article

  • Vernon's book Implementing DDD and modeling of underlying concepts

    - by EdvRusj
    The following questions all refer to examples presented in Implementing DDD. In the article we can see from Figure 6 that both BankingAccount and PayeeAccount represent the same underlying concept of a Banking Account (BA).

    1. On page 64 the author gives an example of a publishing organization, where the life-cycle of a book goes through several stages (proposing a book, the editorial process, translation of the book...) and at each of those stages the book has a different definition. Each stage of the book is defined in a different Bounded Context, but do all these different definitions still represent the same underlying concept of a Book, just like both BankingAccount and PayeeAccount represent the same underlying concept of a BA?

    2. a) I understand why User shouldn't exist in the Collaboration Context (CC), but instead should be defined within the Identity and Access Context (IAC) (page 65). But still, do User (IAC), Moderator (CC), Author (CC), Owner (CC) and Participant (CC) all represent different aspects of the same underlying concept?

    b) If yes, then this means that CC contains several model elements (Moderator, Author, Owner and Participant), each representing a different aspect of the same underlying concept (just like both BankingAccount and PayeeAccount represent the same underlying concept of a BA). But isn't this considered a duplication of concepts (Evans' book, page 339), since several model elements in CC represent the same underlying concept?

    c) If Moderator, Author... don't represent the same underlying concept, then what underlying concept does each represent?

    3. In an e-commerce system, the term Customer has multiple meanings (page 49): when the user is browsing the Catalog, Customer has a different meaning than when the user is placing an Order. But do these two different definitions of a Customer represent the same underlying concept, just like both BankingAccount and PayeeAccount represent the same underlying concept of a BA?

    Thanks
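
    As a side note, the Figure 6 idea is easy to picture in code: two Bounded Contexts each keep their own model of the same underlying Banking Account concept, exposing only the attributes that context cares about. A tiny C# sketch; the namespaces and members are illustrative, not taken from the book or the article:

        using System;

        namespace BankingContext
        {
            // The account as the context that manages it sees it.
            public class BankingAccount
            {
                public string AccountNumber { get; set; }
                public decimal Balance { get; set; }
            }
        }

        namespace PaymentContext
        {
            // The same underlying account, reduced to what a payer's context needs.
            public class PayeeAccount
            {
                public string AccountNumber { get; set; }
                public string PayeeName { get; set; }
            }
        }

        class ContextDemo
        {
            static void Main()
            {
                var bankView = new BankingContext.BankingAccount { AccountNumber = "123-456", Balance = 100m };
                // A translation layer (e.g. an anti-corruption layer) would map one view to the other.
                var payeeView = new PaymentContext.PayeeAccount { AccountNumber = bankView.AccountNumber, PayeeName = "ACME Corp" };
                Console.WriteLine(payeeView.AccountNumber);
            }
        }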

    Read the article

  • Order in which passphrase is asked for encrypted volumes

    - by Lars Kotthoff
    I have installed 12.10 on a machine with two disks. The root partition is on one disk, the swap partition on the other. Both disks are encrypted and I have added the corresponding entries to /etc/crypttab. During boot, it asks for the passphrase for the disk with the root filesystem. Then it continues booting and gets to the login screen before I get a chance to enter the passphrase for the other disk. After logging in, I verified that it was actually waiting for me to enter the passphrase for that second partition (askpass process is running). But at that point, I have no way of entering the passphrase anymore. The manpage for crypttab suggests that the order in which the volumes are specified matters, so I changed it to have the swap disk first. I updated the initramfs and grub afterwards, but it didn't make any difference. How can I specify the order in which the encrypted partitions are unlocked? I'm looking for a solution that either asks for the swap passphrase first or tells the system to wait until all encrypted partitions are unlocked before displaying the login screen.

    Read the article

  • ODI 11g – Faster Files

    - by David Allan
    Deep in the trenches of ODI development I raised my head above the parapet to read a few odds and ends, and then thought: why don't they know this? Such as this article here – in the past, customers (see forum) were told to use a staging route, which has a big overhead for large files. This KM is an example of the great extensibility capabilities of ODI. It's quite simple, just a new KM that:

    - improves the out of the box experience – just build the mapping and the appropriate KM is used
    - improves out of the box performance for file to file data movement

    This improvement to the out of the box handling of File to File data integration cases (from the 11.1.1.5.2 companion CD onwards) dramatically speeds up file integration. In the past I had seen some consultants write Perl versions of the file to file integration case; now Oracle ships this KM to fill the gap. You can find the documentation for the IKM here. The KM uses pure Java to perform the integration, using java.io classes to read and write the file through a pipe – it uses Java threading in order to super-charge the file processing, and can process several source files at once when the datastore's resource name contains a wildcard. This is a big step for regular file processing on the way to super-charging big data files using Hadoop – the KM works with the lightweight agent and regular filesystems. So in my design below, transforming a bunch of files, the IKM File to File (Java) knowledge module was assigned by default. I pointed the KM at my JDK (since the KM generates and compiles Java), and I also increased the thread count to 2 to take advantage of my 2 processors. For my illustration I transformed (you can also filter if desired) and moved about 1.3 GB with 2 threads in 140 seconds (with a single thread it took 220 seconds) – by no means was this on any super computer, by the way. The great thing here is that it worked well out of the box from design to execution without any funky configuration, plus, and it's a big plus, it was much faster than before. So if you are doing any file to file transformations, check it out!
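
    The KM itself is Java and ships with ODI, but the underlying pattern it describes – one thread reads the source file into a bounded buffer while another drains that buffer to the target – is easy to sketch. A rough illustration of the idea in C# (the language used elsewhere on this page); none of this is ODI code, and the file names are made up:

        using System;
        using System.Collections.Concurrent;
        using System.IO;
        using System.Threading.Tasks;

        class PipedFileCopySketch
        {
            // One thread reads chunks into a bounded queue, another drains it to the target file.
            static void Copy(string sourcePath, string targetPath)
            {
                var pipe = new BlockingCollection<byte[]>(boundedCapacity: 16);

                Task reader = Task.Run(() =>
                {
                    using (FileStream input = File.OpenRead(sourcePath))
                    {
                        var buffer = new byte[64 * 1024];
                        int read;
                        while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                        {
                            var chunk = new byte[read];
                            Array.Copy(buffer, chunk, read);
                            pipe.Add(chunk);           // blocks if the writer falls behind
                        }
                    }
                    pipe.CompleteAdding();             // tell the writer there is no more data
                });

                Task writer = Task.Run(() =>
                {
                    using (FileStream output = File.Create(targetPath))
                    {
                        foreach (byte[] chunk in pipe.GetConsumingEnumerable())
                            output.Write(chunk, 0, chunk.Length);
                    }
                });

                Task.WaitAll(reader, writer);
            }

            static void Main()
            {
                Copy("source.dat", "target.dat");      // hypothetical file names
            }
        }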

    Read the article

  • Should I reformat XP with: Quick, regular, or "the current file-system"?

    - by Julie
    When reformatting, Windows XP asks me to choose from these formatting methods (implying that ALL of them are "formatting methods"... even #3):

    1. Reformat using NTFS (quick)
    2. Reformat using NTFS
    3. Leave the current file-system intact (no changes)

    What does choice #3 really mean? Does it mean:

    A. Leave the current file-system (whatever file-system is already in use) and reformat to match that. (i.e. If you currently have NTFS, reformat to that again. If you currently have FAT32, reformat to that again. That is: reformat without changing to a different file-system. Leave the current type.)

    or...

    B. Do absolutely nothing. Don't format. Don't delete any of my files. Abort the formatting process entirely.

    Read the article

  • Dealing with 2D pixel shaders and SpriteBatches in XNA 4.0 component-object game engine?

    - by DaveStance
    I've got a bit of experience with shaders in general, having implemented a couple of very simple 3D fragment and vertex shaders in OpenGL/WebGL in the past. Currently, I'm working on a 2D game engine in XNA 4.0 and I'm struggling with the process of integrating per-object and full-scene shaders into my current architecture. I'm using a component-entity design, wherein my "Entities" are merely collections of components that are acted upon by discrete system managers (SpatialProvider, SceneProvider, etc). In the context of this question, my draw call looks something like this:

        SceneProvider::Draw(GameTime) calls...
        ComponentManager::Draw(GameTime, SpriteBatch) which calls (on each drawable component)
        DrawnComponent::Draw(GameTime, SpriteBatch)

    The SpriteBatch is set up, with the default SpriteBatch shader, in the SceneProvider class just before it tells the ComponentManager to start rendering the scene. From my understanding, if a component needs to use a special shader to draw itself, it must do the following when its Draw(GameTime, SpriteBatch) method is invoked:

        public void Draw(GameTime gameTime, SpriteBatch spriteBatch)
        {
            spriteBatch.End();
            spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, null, null, null, EffectShader, ViewMatrix);

            // Draw things here that are shaded by the "EffectShader."

            spriteBatch.End();
            spriteBatch.Begin(/* same settings that were set by SceneProvider to ensure the rest of the scene is rendered normally */);
        }

    My question is, having been told that numerous calls to SpriteBatch.Begin() and SpriteBatch.End() within a single frame are terrible for performance, is there a better way to do this? Is there a way to instruct the currently running SpriteBatch to simply change the Effect shader it is using for this particular draw call and then switch it back before the function ends?
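
    One commonly suggested alternative (offered here as a sketch, not the definitive answer): begin the batch once per frame with SpriteSortMode.Immediate, and have a component apply its custom effect pass manually before its draw calls. In Immediate mode each Draw is issued right away, so the last pass applied is the one used, and no End()/Begin() pair is needed. Rough sketch under those assumptions; the fields are placeholders and the effect must be one SpriteBatch can work with:

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        // Assumes SceneProvider began the SpriteBatch once per frame with SpriteSortMode.Immediate.
        public class DrawnComponentSketch
        {
            public Effect EffectShader;   // placeholder for the component's custom shader
            public Texture2D Sprite;      // placeholder texture
            public Vector2 Position;

            public void Draw(GameTime gameTime, SpriteBatch spriteBatch)
            {
                // Override SpriteBatch's current pixel shader for the draws that follow,
                // without ending and restarting the batch.
                EffectShader.CurrentTechnique.Passes[0].Apply();
                spriteBatch.Draw(Sprite, Position, Color.White);

                // Caveat: the custom shader stays active for later draws until another
                // pass is applied or the batch is begun again, so the manager has to
                // restore the default state between components.
            }
        }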

    Read the article

  • HTTP Protocol

    I have worked with the HTTP protocol for about ten years now and I have found it to be incredibly useful for transferring data, especially for remote systems and regardless of the network environment. Prior to the existence of web services, developers used HTTP to screen scrape data off of web pages in order to interact with remote systems, and then processed the data as they needed. I used the HttpWebRequest and HttpWebResponse classes in order to screen scrape data from various sites that had information I needed if no web service was available. This allowed me to call just about any webpage and grab all of the content on the page. Below is a piece of a web spider that I built about 5-7 years ago. The spider uses the HTTP protocol to request webpages and then parses the data that is returned. At the time of writing the spider I wanted to create a searchable index of sites I frequented.

        // C# 2.0 Framework
        // Creating a request for a specific webpage
        HttpWebRequest webreq = (HttpWebRequest)WebRequest.Create(_Url);

        // Storing the response of the web request
        HttpWebResponse webresp = (HttpWebResponse)webreq.GetResponse();
        StreamReader loResponseStream = new StreamReader(webresp.GetResponseStream());
        _Content = loResponseStream.ReadToEnd();

        // Adjust the encoding of the response
        string charset = "";
        EncodeString(ref _Content, ref charset);
        loResponseStream.Close();

        // Parse data from the response
        _Content = _Content.Replace("\n", "");
        _Head = GetTagByName("Head", _Content);
        _Title = GetTagByName("title", _Content);
        _Body = GetTagByName("body", _Content);

    Read the article

  • 2011 Tech Goal Review

    - by kerry
    A year ago I wrote a post listing my professional goals for 2011. I thought I would review them and see how I did.

    - Release an Android app to the marketplace – Didn't do it. In fact, haven't really touched Android much since I wrote that. I still have some ideas but am not sure if I will get around to it.
    - Contribute free software to the community – I did do this. I have been collaborating with others via github more lately.
    - Regularly attend user group meetings outside of Java – Did not do this. Family life being what it is makes this not that much of a priority right now.
    - Obtain the Oracle Certified Web Developer Certification – Did not do this. This is not much of a priority to me any more.
    - Learn Scala – I am about 50/50 on this one. I read a few Scala books but did not write an actual application.
    - Write an app using JSF – Did not do this. Still interested.
    - Present at a user group meeting – I did a Maven presentation at the Java user group.
    - Use git more, and more effectively – Definitely did this. Using it on a daily basis now.

    Overall, I got about halfway on my goals. It's not too bad since I did do a few things that weren't on my list:

    - Learned to develop applications using GWT and deploy them to Google App Engine
    - Converted one of my sites from PHP to Ruby / Sinatra (learning to use it in the process)
    - Studied up on the HTML 5 features and did a lot of Javascript development

    Read the article
