Search Results

Search found 2714 results on 109 pages for 'extremely frustrated'.

Page 76 of 109

  • State of the (Commerce) Union: What the healthcare.gov hiccups teach us about the commerce customer experience

    - by Katrina Gosek
    Guest Post by Brenna Johnson, Oracle Commerce Product

    A lot has been said about the healthcare.gov debacle in the last week. Regardless of your feelings about the Affordable Care Act, there's a hidden issue in this story that most of the American people don't understand: delivering a great commerce customer experience (CX) is hard. It shouldn't be, but it is. The reality of the government's issues getting the healthcare site up and running smoothly is something we in the online commerce community know too well. If there's one thing the botched launch of the site has taught us, it's that regardless of the size of your budget or the power of an executive with a high-profile project, some of the biggest initiatives with the most attention (and the most at stake) don't go as planned. It may even give you a moment of solace – we have the same issues! But why?

    Organizations engage too many separate vendors with different technologies, running sections or pieces of a site to get live. When things go wrong, it takes time to identify the problem – and who or what is at the center of it. Unfortunately, this is a brittle way of setting up a site, making it susceptible to breaks, bugs, and scaling issues. But it's the reality of running a site with legacy technology constraints in today's demanding, customer-centric market. This approach also means there are a lot of cooks in a lot of different kitchens. You've got development and IT, the business and the marketing team, an external Systems Integrator to bring it all together, a digital agency or consultant, QA, product experts, 3rd party suppliers, and the list goes on. To complicate things, different business units are held responsible for different pieces of the site and for managing different technologies. And again – due to legacy organizational structure and processes, this is all accepted as the normal State of the Union.

    Digital commerce has been commonplace for 15 years. Yet getting a site live, maintained and performing requires orchestrating a cast of thousands (or at least dozens), big dollars, and some finger-crossing. But it shouldn't. The great thing about the advent of mobile commerce and the continued maturity of online commerce is that it's forced organizations to think from the outside, in. Consumers – whether they're shopping for shoes or a new healthcare plan – don't care about what technology issues or processes you have behind the scenes. They just want it to work. They want their experience to be easy, fast, and tailored to them and their needs – whatever they are. This doesn't sound like a tall order to the American consumer – especially since they interact with sites that do work smoothly. But the reality is that it takes scores of people, teams, check-ins, late nights, testing, and some good luck to get sites to run, and even more so at Black Friday (or October 1st) traffic levels. The last thing on a customer's mind is making excuses for why they can't buy a product – just get it to work.

    So what is the government doing? My guess is they're working day and night to get the site performing – and having to throw big money at the problem. In the meantime they're sending frustrated online users to the call center, or even a location where a trained "navigator" can help them in person to complete their selection. Sounds a lot like multichannel commerce (where broken communication between siloed touchpoints will only frustrate the consumer more).
One thing we’ve learned is that consumers spend their time and money with brands they know and trust. When sites are easy to use and adapt to their needs, they tend to spend more, come back, and even become long-time loyalists. Achieving this may require moving internal mountains, but there’s too much at stake to ignore the sea change in how organizations are thinking about their customer. If the thought of re-thinking your internal teams, technologies, and processes sounds like a headache, think about the pain associated with losing valuable customers – and dollars. Regardless if you’re in B2B or B2C, it’s guaranteed that your competitors are making CX a priority. Those early to the game who have made CX a priority have already begun to outpace their competition. So as you’re planning for 2014, look to the news this week. Make sure the customer experience is a focus at your organization. Expectations are at record highs. Map your customer’s journey, and think from the outside, in. How easy is it for your customers to do business with you? If they interact with many touchpoints across your organization, are the call center, website, mobile environment, or brick and mortar location in sync? Do you have the technology in place to achieve this? It’s time to give the people what they want!

    Read the article

  • Dual boot Windows 8 and Ubuntu 12.10 across a reboot

    - by AK4749
    My Setup: I have two separate SSDs, and each contains an independently bootable OS - W8 and U12.10. From my extremely limited knowledge, this means each has a functioning EFI partition(?). My default boot order (GA-Z68XP-UD3P mobo with UEFI firmware update) boots the UEFI partition containing Windows first, but if I enter the BIOS I can select the "ubuntu" entry to successfully boot Ubuntu. Both drives are GPT, and both boot via EFI.

    What I want to do: Reboot Windows 8 and re-enter W8 (this is happening now due to the default boot order). What I want to change, however, is to boot into Ubuntu if I reboot from Ubuntu. Essentially, I would like to work within one OS unless I consciously choose otherwise. Normally, I would not even ask about something I thought was impossible, but...

    Why I think this is possible: When trying EasyBCD to add Ubuntu to the W8 UEFI bootloader, I noticed an "iReboot" addon or something that allows you to select which OS to boot into from within the OS. Note that I ended up not using the NeoGrub entry to chain Ubuntu off the W8 bootloader because I couldn't get much help with it. Is this possible? Have I had too much coffee and gone insane? Thank you all for your time, AK
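    One command-line angle worth noting (not something from the post above, so treat it as a hedged suggestion): on a UEFI system, Ubuntu's efibootmgr can set a one-shot BootNext entry, so a reboot issued from Ubuntu comes back to Ubuntu while the stored default still points at Windows. The entry number below is a placeholder; list your own entries first.

        # list the firmware boot entries and note the number of the "ubuntu" entry
        sudo efibootmgr
        # boot that entry on the next restart only (0003 is a placeholder), then reboot
        sudo efibootmgr --bootnext 0003
        sudo reboot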

    Read the article

  • How can I duplicate HBCD's XP boot loader with my MBR?

    - by Warpstone
    I'm stumped. I'm migrating a Win XP Lenovo T500 to an SSD. I copied the XP partition to the SSD using EaseUS, aligned the boot sector using GParted, and now the MBR needs to be rebuilt (fair enough). However, all attempts to use the Windows Recovery Console hang (both via a boot CD and even when the console was installed as a boot option). I've tried using a bunch of tools to rebuild/replace the MBR, but no dice. They all say the MBR has been fixed, but I cannot load Windows from the SSD. HBCD's "boot from Windows" option works just fine, however. I'm confused as to what HBCD can do that my drive can't. How can I get that functionality on my SSD? Is it an MBR fix I can mirror? The SSD is extremely fast when I do use HBCD to boot up... but it would be nice to not need token-based access to the machine! :) Note: I know, Windows 7 may be worth a fresh install, but I'm trying to avoid the cost and hassle if possible.
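    A heavily hedged sketch of one more thing to try, assuming you can reach a command prompt from HBCD's mini Windows environment (or another working Windows install) and have a copy of bootsect.exe from Vista/7 installation media: writing XP-style (NTLDR) boot code to the SSD is roughly what the recovery console's fixmbr/fixboot would have done. The drive letter is an assumption; use whatever letter the SSD's XP partition gets in that environment.

        REM C: is assumed to be the SSD's XP partition as seen from the boot environment
        bootsect /nt52 C: /mbr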

    Read the article

  • Embedded Windows XP

    - by Kyle
    My company acquired a brake press for bending large structural steel plates. We received it second hand and it came with an embedded copy of Windows XP. Now for the part that's driving me nuts: Plug and Play has been turned off, and Accessibility Options have been disabled. What does this mean for me? Keyboards will not work! Nothing that plugs into a USB port will work, and it does not have a CD-ROM drive. I have tried to turn on Plug and Play using the on-screen keyboard, but it is not there since Accessibility Options is turned off. I would just get an updated copy of the embedded OS, but they come from Sweden and are extremely expensive. I assume there has got to be a way to get USB devices to work. We need to get a wifi adapter on it so I can use TeamViewer and remotely configure it for our needs. Things to keep in mind: There is no keyboard, so everything has to be done with the mouse. There is no PS/2 port for a keyboard, just a mouse. Odd. I am 5 states away from this location and have been working with a tech who is physically installing the machine. System32 seems to be missing A LOT of files; the tech told me there are only 8 folders in there and no other files (I don't even understand how Windows is running like this). If anyone has ANY ideas I would appreciate it, I am unsure where to go from here.

    Read the article

  • Can next hop address be same as destination address?

    - by Raj
    Like if the host address is 100.0.0.1, the next hop address is 100.0.0.2 and the destination IP address is also 100.0.0.2. Is this a valid use case? Any real-life usage?

                 <dest ip>                  <next hop>
        ip route 100.0.0.2 255.255.255.255 100.0.0.2 weight 1 next-hop-vrf GlobalRouter

    Above is the command on a router inside a VRF. 100.0.0.2 is pingable from the host. 100.0.0.1 & 100.0.0.2 are IP addresses assigned to a VLAN on the host & destination respectively. On a Linux box, such a configuration is valid:

        [root]# netstat -r -n
        Kernel IP routing table
        Destination     Gateway         Genmask          Flags  MSS Window  irtt Iface
        55.55.55.55     55.55.55.55     255.255.255.255  UGH      0 0          0 eth0
        [root]# ip route show
        55.55.55.55 via 55.55.55.55 dev eth0

    As per my understanding, if a destination IP is reachable (i.e. in the same subnet as the host IP) we don't need a next hop. I came across one application of using a next hop for a destination IP in the same subnet (i.e. for VPN). See this: Will packets send to the same subnet go through routers? If next hop != destination IP, with both in the same subnet as the host, is a valid scenario for VPN, then I am wondering: what are the applications of next_hop == dest_ip with the subnet the same as the host's? This is my first post on Super User. Extremely happy with the quick and warm response.
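    For reference, the Linux-side equivalent can be reproduced with iproute2. This is a minimal sketch using the addresses from the question, assuming 100.0.0.1 is already configured on eth0 with a connected route that makes 100.0.0.2 resolvable on-link:

        # add a host route whose gateway equals the destination itself
        ip route add 100.0.0.2/32 via 100.0.0.2 dev eth0
        # confirm which route the kernel actually picks for that destination
        ip route get 100.0.0.2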

    Read the article

  • NAS for Mac OS X Server

    - by SamAdmin
    I'm using Mac OS X Server and want to allow the users that connect to their network accounts to store their data on a NAS drive. I want the users to connect to the Lion server as this allows for better policies and management for me and for their afp share to be located on a NAS drive. I've looked into home directories and network logins however I don't want the users to connect into a different login environment, just an authentication against their provided account on the Lion server and for their finder to take them to their own storage area - located on the NAS drive. Currently I am using FreeNAS for both authentication and storage however there are getting to be far too many people to manage each afp share and account, plus just using FreeNAS is extremely limiting for expansion and if something goes wrong with 1 entity the entire system goes down. Using the Lion server for user accounts and policies will be much better for this expanding business. I have looked into LDAP, using the Lion server as an LDAP server to authenticate against for FreeNAS however I have had issues with this and thought a different approach could be better from the other side of the situation... Providing the account with somewhere to store data rather than the afp share authenticating against an LDAP server. I am wrong to try it this way? Is it possible to logically add storage to a Mac OS X Server which can be recognised as a local drive, so can be used for network accounts?

    Read the article

  • How can I see what processes make my server slow?

    - by Steven
    All the websites on my server are extremely slow or not loading at all. Even the server admin interface (Plesk) will not load sometimes. There have been no changes to the sites for the last couple of months. How can I see which processes are making my server slow? My environment looks like this:

        Server: VPS running Linux 2.8.x
        OS: CentOS 5
        Management interface: Plesk 9.x
        Memory: 1024MB
        CPU: 2.2GHz

    My websites run on PHP and MySQL. I finally managed to connect to my server (PuTTY + SSH). Running top did not show any processes using more than about 2% CPU, and none were using excessive memory. I also got a friend to install a program that checks the core files, and all seemed fine. So I'm leaning towards network issues or some other server malfunction, but I'm not able to find out what can be wrong. Here are some answers to Sean Kimball: I don't run mail services on my server yet. There are no specific bandwidth peaks. Prefork looks like this:

        <IfModule prefork.c>
        StartServers          8
        MinSpareServers       5
        MaxSpareServers      20
        ServerLimit         256
        MaxClients          256
        MaxRequestsPerChild 4000
        </IfModule>

    Not sure what you mean with the DNS question, but I think it's up and running. There are no processes running wild. Where can I find the average load? Telnet is disabled and I have to log in using SSH :)
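    On the question of where to find the load average and what is eating the box, a few standard commands are a reasonable starting point (a generic sketch, not specific to Plesk; iostat comes from the sysstat package, and the MySQL credentials are an assumption):

        uptime                    # the last three numbers are the 1/5/15-minute load averages
        cat /proc/loadavg         # same values, straight from the kernel
        top                       # press P to sort by CPU, M to sort by memory
        vmstat 5                  # a high "wa" column points at disk I/O wait
        iostat -x 5               # per-device utilisation (requires sysstat)
        mysqladmin -u root -p processlist   # long-running MySQL queries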

    Read the article

  • Multi-partition USB stick

    - by nightcracker
    In my freelance job as "the dude that fixes your computer" I have an extremely handy tool: a bootable USB stick with an Ubuntu LiveCD that allows me to recover and investigate in a known, working environment. Now, I want to reformat this USB stick and reinstall it with casper-rw persistence. I did this a few times before with a FAT-formatted USB stick. It was a horror. The USB drive corrupted constantly: people accidentally removing the USB stick, the computer not shutting down properly, etc. What I want now is to create a multi-partition USB stick so I can put Ubuntu on an ext partition, but still be able to store some Windows stuff on it by having a secondary FAT partition. However, I read somewhere that Windows will only check the first partition on USB sticks, which causes a problem with the first bootable Linux partition. Is this possible in some way? EDIT: Perhaps it wasn't clear what the problem is. The problem is that I read somewhere that Windows will only recognize the first partition on a USB stick. But I want two partitions, an ext partition and a FAT partition. No issues so far, but in order to be bootable the ext partition must be the first one!
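    A minimal sketch of one commonly suggested layout, assuming the stick shows up as /dev/sdX (a placeholder; double-check with lsblk, and note this wipes the stick): put the FAT partition first so Windows can see it and the live system can boot from it, and put the ext partition second, labelled casper-rw, so the live session picks it up for persistence. The sizes are arbitrary examples.

        sudo parted --script /dev/sdX mklabel msdos
        sudo parted --script /dev/sdX mkpart primary fat32 1MiB 8GiB
        sudo parted --script /dev/sdX mkpart primary ext3 8GiB 100%
        sudo mkfs.vfat -n LIVE /dev/sdX1          # live system files + Windows-visible storage
        sudo mkfs.ext3 -L casper-rw /dev/sdX2     # persistence partition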

    Read the article

  • MySQL : table organisation for very large sets with high update frequency

    - by Remiz
    I'm facing a dilemma in the choice of my application's MySQL schema. So before I start, here is an extremely simplified picture of my database. Schema here: http://i43.tinypic.com/2wp5lxz.png In one sentence: for each customer, the application harvests text data and attaches tags to each piece of data collected. As an approximation of the usage of each table, here is what I expect:

        customer: ~5000, shouldn't grow fast
        data: 5 million per customer, could double or triple for big customers
        tag: ~1000, fairly fixed size
        data_tag: hundreds of millions per customer, easily; each piece of data can be tagged a lot

    The harvesting process is permanent, which means that around every 15 minutes new data comes in and is tagged, and that requires very constant index refreshing. A lot of my queries are a SELECT COUNT of DATA between specific DATES, tagged with a specific TAG, on a specific CUSTOMER (very rarely will it involve several customers). Here is the situation: you can imagine that with this kind of data volume I'm facing a challenge in terms of data organization and indexing. Again, it's a very minimalistic and simplified version of my structure. My question is, is it better: to stick with this model and manage crazy index optimization (which potentially involves having billions of rows in the data_tag table), or to change the schema and use one data table and one data_tag table per customer (which involves having 5000 tables in my database)? I'm running all of this on a MySQL 5.0 dedicated server (quad-core, 8 GB of RAM), replicated. I only use InnoDB; I also have another server that runs Sphinx. So knowing all of this, I can't wait to hear your opinion about this. Thanks.
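    One thing that usually gets asked alongside a question like this is whether the COUNT query can be resolved from an index alone. As a heavily hedged sketch (the real column names are only in the linked schema image, so customer_id, tag_id, created_at and the database name mydb below are assumptions), a composite index covering the filter columns is the usual first step before deciding to split tables per customer:

        # column and database names are assumptions, not the actual schema
        mysql -u root -p mydb -e "ALTER TABLE data_tag
            ADD INDEX ix_customer_tag_date (customer_id, tag_id, created_at);"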

    Read the article

  • To Make Diversity Work, Managers Must Stop Ignoring Difference

    - by HCM-Oracle
    By Kate Pavao - Originally posted on Profit Executive coaches Jane Hyun and Audrey S. Lee noticed something during their leadership development coaching and consulting: Frustrated employees and overwhelmed managers. “We heard from voices saying, ‘I wish my manager understood me better’ or ‘I hope my manager would take the time to learn more about me and my background,’” remembers Hyun. “At the same token, the managers we were coaching had a hard time even knowing how to start these conversations.”  Hyun and Lee wrote Flex to address some of the fears managers have when it comes to leading diverse teams—such as being afraid of offending their employees by stumbling into sensitive territory—and also to provide a sure-footed strategy for becoming a more effective leader. Here, Hyun talks about what it takes to create innovate and productive teams in an increasingly diverse world, including the key characteristics successful managers share. Q: What does it mean to “flex”? Hyun: Flexing is the art of switching between leadership styles to work more effectively with people who are different from you. It’s not fundamentally changing who you are, but it’s understanding when you need to adapt your style in a situation so that you can accommodate people and make them feel more comfortable. It’s understanding the gap that might exist between you and others who are different, and then flexing across that gap to get the result that you're looking for. It’s up to all of us, not just managers, but also employees, to learn how to flex. When you hire new people to the organization, they're expected to adapt. The new people in the organization may need some guidance around how to best flex. They can certainly take the initiative, but if you can give them some direction around the important rules, and connect them with insiders who can help them figure out the most critical elements of the job, that will accelerate how quickly they can contribute to your organization. Q: Why is it important right now for managers to understand flexing? Hyun: The workplace is becoming increasingly younger, multicultural and female. The numbers bear it out. Millennials are entering the workforce and becoming a larger percentage of it, which is a global phenomenon. Thirty-six percent of the workforce is multicultural, and close to half is female. It makes sense to better understand the people who are increasingly a part of your workforce, and how to best lead them and manage them as well. Q: What do companies miss out on when managers don’t flex? Hyun: There are high costs for losing people or failing to engage them. The estimated costs of replacing an employee is about 150 percent of that person’s salary. There are studies showing that employee disengagement costs the U.S. something like $450 billion a year. But voice is the biggest thing you miss out on if you don’t flex. Whenever you want innovation or increased productivity from your people, you need to figure out how to unleash these things. The way you get there is to make sure that everybody’s voice is at the table. Q: What are some of the common misassumptions that managers make about the people on their teams? Hyun: One is what I call the Golden Rule mentality: We assume when we go to the workplace that people are going to think like us and operate like us. 
But sometimes when you work with people from a different culture or a different generation, they may have a different mindset about doing something, or a different approach to solving a problem, or a different way to manage some situation. When see something that’s different, we don't understand it, so we don't trust it. We have this hidden bias for people who are like us. That gets in the way of really looking at how we can tap our team members best potential by understanding how their difference may help them be effective in our workplace. We’re trained, especially in the workplace, to make assumptions quickly, so that you can make the best business decision. But with people, it’s better to remain curious. If you want to build stronger cross-cultural, cross-generational, cross-gender relationships, before you make a judgment, share what you observe with that team member, and connect with him or her in ways that are mutually adaptive, so that you can work together more effectively. Q: What are the common characteristics you see in leaders who are successful at flexing? Hyun: One is what I call “adaptive ability”—leaders who are able to understand that someone on their team is different from them, and willing to adapt his or her style to do that. Another one is “unconditional positive regard,” which is basically acceptance of others, even in their vulnerable moments. This attitude of grace is critical and essential to a healthy environment in developing people. If you think about when people enter the workforce, they're only 21 years old. It’s quite a formative time for them. They may not have a lot of management experience, or experience managing complex or even global projects. Creating the best possible condition for their development requires turning their mistakes into teachable moments, and giving them an opportunity to really learn. Finally, these leaders are not rigid or constrained in a single mode or style. They have this insatiable curiosity about other people. They don’t judge when they see behavior that doesn’t make sense, or is different from their own. For example, maybe someone on their team is a less aggressive than they are. The leader needs to remain curious and thinks, “Wow, I wonder how I can engage in a dialogue with this person to get their potential out in the open.”

    Read the article

  • How to automate downloading files?

    - by Damon
    I got a book which had a pass to access digital versions of hi-res scans of much of the artwork in the book. Amazing! Unfortunately, the presentation of all of these is 177 pages of 8 images each, with links to zip files of jpgs. It is extremely tedious to browse, and I would love to be able to get all the files at once rather than sitting and clicking through each one separately. The pages run from archive_bookname/index.1.htm to archive_bookname/index.177.htm, and each of those pages has 8 links to files such as <snip>/downloads/_Q6Q9265.jpg.zip, <snip>/downloads/_Q6Q7069.jpg.zip, <snip>/downloads/_Q6Q5354.jpg.zip, which don't quite go in order. I cannot get a directory listing of the parent /downloads/ folder. Also, the files are behind a login wall, so using a non-browser tool might be difficult without knowing how to recreate the session info. I've looked into wget a little but I'm pretty confused and have no idea if it will help me with this. Any advice on how to tackle this? Can wget do this for me automatically?
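    Since wget was already mentioned, here is a minimal sketch of the usual approach, assuming the login session can be carried over by exporting the site's cookies from a logged-in browser into a cookies.txt file (a browser extension can do that), and with the site URL below as a placeholder:

        # loop over the 177 index pages, following links one level deep
        # and keeping only the zip files, without recreating directories
        for i in $(seq 1 177); do
          wget --load-cookies cookies.txt -r -l 1 -nd -np -A "*.jpg.zip" \
               "http://example.com/archive_bookname/index.$i.htm"
        done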

    Read the article

  • Is there a simple context-menu add-in that could make-up for the Windows-7 status bar deficiency?

    - by DanO
    Edit: I initially asked about free disk space and selected item size. It has since been pointed out that the selected item size summary is still available natively in the details pane. I had read elsewhere (Wikipedia) that this was removed along with disk free space, which is not the case. Only free disk space has been completely removed; selection size is still available. Is there a context-menu add-in out there that could show the free disk space of the relevant drive when you right-click? This would go a long way to compensating for one of the only steps backward I've discovered in Windows 7 so far. I doubt anyone had created one specifically for this need before Windows 7, because this information was previously easily accessible in the status bar. I thought about creating one, but it has been a while since I have messed with the Shell API, and I know there are coders out there who could do it faster and better. If you've heard of one, or know of something else to make up for this Microsoft misstep, I'd appreciate hearing about it. If MS were listening to the community they would already have a PowerToy or add-in of some kind to un-break this (they could even release it unsupported), as there seem to be many power users who are extremely annoyed by this feature-removal decision. If anyone has seen something, please post it here. As it has been only 4 days since the official Windows 7 release, I'll wait at least a week to choose an answer. Here's a prototype screenshot: SU question 19232 is related.

    Read the article

  • Receiving and processing SMS messages through a script?

    - by ShankarG
    I am attempting to setup a system to receive and process SMS messages automatically. The system is intended for use in a context (an unfunded migrant workers' union in India) where both finances and sysadmin skills are extremely constrained (I would be the only person, in the near future, who would be administering the system). The intention is to make some functions - registration of members, generation of ID cards, communication of alerts and other information - easier. However, for receiving and sending SMS, I have not been able to find any email to SMS or other kind of gateway that functions in India. Perhaps there is one (edit: apparently Clickatell does have an India service, but the prices appear astronomical). If not, can one rely on a USB mobile modem (such as those provided by many mobile providers in India)? It seems like, with utilities such as gammu or bitpim, SMS operations on such a modem could be scripted. Is this actually feasible, though? Thanks in advance for your thoughts and suggestions. edit: Original first question removed since the two questions had little to do with each other. The original first question has been asked separately here
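    On the USB-modem route mentioned above, gammu's SMS daemon (gammu-smsd) can poll the modem and hand every incoming message to a script, which is usually enough to drive registration and alert workflows. A hedged sketch follows; the device path, connection string and script path are assumptions that depend on the actual modem:

        # /etc/gammu-smsdrc
        [gammu]
        device = /dev/ttyUSB0
        connection = at115200

        [smsd]
        service = files
        inboxpath = /var/spool/gammu/inbox/
        RunOnReceive = /usr/local/bin/handle_sms.sh

    The RunOnReceive script is started by gammu-smsd with the message exposed in environment variables such as SMS_1_NUMBER and SMS_1_TEXT, so the processing logic can live in an ordinary shell script.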

    Read the article

  • php mail() function painfully slow on local development machine

    - by Michael B
    Background: If you have set up a local Apache server for development purposes, you may have run into the problem where sendmail takes a long time (at least one minute) to send emails. This is extremely frustrating if you are trying to debug a problem with an email you have generated. There are several forum posts on the internet that discuss this problem; however, none of them describes what to do in enough detail for my limited knowledge. Here are the steps that worked for me:

    1) Find your hostname (in case you've forgotten it) using this command:

        :~$ cat /etc/hostname
        myhostname

    2) Edit the file /etc/hosts and make sure the first line is the following:

        127.0.0.1 localhost.localdomain localhost myhostname

    3) Edit the sendmail configuration file (/etc/mail/sendmail.cf in Ubuntu) and uncomment the line:

        #O HostsFile=/etc/hosts

    4) Restart the computer. The computer should boot up much faster now and the mail() function should return almost immediately. HOWEVER, the emails won't actually be sent unless you follow step 5.

    5) You must now use the sendmail '-f' option whenever using the mail function. For example:

        mail('[email protected]', 'the subject', 'the message', null, '[email protected]');

    My question for my fellow serverfaulters is: what further changes can be made so that I don't have to use the sendmail -f option? Although it's not very hard to add the -f option, it is a problem when your CMS (such as Drupal) does not use the -f option when sending mail. You would need to hack a core module to add this option.
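    One way to avoid passing -f on every call (a hedged suggestion, not part of the original steps) is to bake the envelope sender into PHP's sendmail_path, so Drupal and anything else calling mail() picks it up without code changes. The php.ini location and the address used are assumptions for the example:

        ; in php.ini (e.g. /etc/php5/apache2/php.ini on Ubuntu)
        sendmail_path = "/usr/sbin/sendmail -t -i -fwebmaster@localhost"

    After editing php.ini, restart Apache (for example with sudo /etc/init.d/apache2 restart) so the new setting takes effect.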

    Read the article

  • Windows 7 - "Magic" frequent folder

    - by TheAdamGaskins
    Every week, I export an mp3 file from audacity into a folder with that day's date (e.g. this past sunday I exported the file to a folder named 20130609). Then I close everything and that's it for a while. Then, I come back a few hours later to upload the file to ftp. I usually have some folders open, so to open a new one, I right click on the folder icon on the taskbar... to open a new folder window and browse to this folder I just created, right? Well I look up a little bit and: So I click it and upload the file, and it actually saves me 30 seconds, which is really awesome... but what in the world? It happens every single week without fail. I create the folder inside the audacity export window. The folder stays on the frequent list until I create a new folder the following week. This was definitely not an advertised feature of Windows 7, and it's extremely handy... but it really just seems like magic to me. How does it work?

    Read the article

  • Formatting a former RAID 0 drive through USB

    - by EXC
    I'll try to be as specific as possible here: I was using two Hitachi 2.5" 500 gb HDDs in my Gateway P-7805u laptop in a RAID 0 configuration. The array was causing the laptop to run extremely hot, however, so I removed them and deleted the RAID array through Intel Matrix HDD manager. I did a clean install of Windows 7 on the original 320 gb HDD that came with the laptop. I never did format the original RAID array HDDs before taking them out of the computer. Now, I am attempting to format the Hitachi 500 gb RAID array HDDs externally through a USB external enclosure. The external HDD drivers install on my clean install OS, but when I go into 'My Computer' there is no external drive available. I cannot format in CMD Prompt because my computer will not designate a drive letter to the external HDD. The drivers install and the HDD is recognized as a Hitachi external drive, but nothing seems to show up in my computer window. I need to know if there is a way to format these drives to NTFS externally.
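    A hedged sketch of one way to proceed, assuming the enclosure itself is working and the drives simply carry leftover RAID metadata that Windows can't assign a letter to: wiping the partition table with diskpart from an elevated Command Prompt usually lets the disk be repartitioned and formatted normally. The disk number below is a placeholder, and clean destroys everything on the selected disk, so pick the 500 GB Hitachi carefully in the list.

        diskpart
        list disk
        select disk 2
        clean
        create partition primary
        format fs=ntfs quick
        assign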

    Read the article

  • Performance of external USB disk with ESXi5

    - by PeterMmm
    I have a new HP DL120 G7 server with ESXi 5. One VM is a Win2003 installation, and I have an external USB 2.0 drive attached via USB Controller and USB Device. I copy a 4GB file from the external USB drive to the server disk. In the VM that takes up to 10 minutes. On a native Win2003 box that takes approx. 3 minutes. I have no explanation for that difference: in any case the bottleneck is the USB connection, which is much slower than the disks (SAS, RAID1). If the USB connection in the VM were USB 1.1 and not USB 2.0, it would take much more time. (The disk performance between server partitions on the VM is correct - see update.) Could it be that my native box is extremely fast and the VM is the normal case? Update: I tried with passthrough, and a first run copied the same data in approx. 7 minutes, still 2 times slower than the native connection. I also did another measurement: the copy between partitions on the same VM takes 3 minutes.

    Read the article

  • moving from WinXP to WinServer in VmWare

    - by Alex
    I have a VMware machine for .NET application testing. Current setup: Host OS: Win7. Guest OS: right now the guest OS is Win XP Pro x64, which runs great with just 1 gigabyte of RAM and 10 gigs of disk space. * This part can be skipped * As I said, there was a program that I needed to test, but unfortunately, by default, VMware installs crappy display drivers (called SVGA II) on XP machines and there is NO way to upgrade them! This resulted in my program's error (the program used SlimDX, a DirectX wrapper, to do some stuff). Eventually I found out that the display drivers most certainly are the problem. For example, a Windows 7 virtual machine uses SVGA 3D drivers and I have NO problems running my SlimDX-based program. Now, regarding Windows Server 2008! Apparently, the WDDM driver is supported by WS2008, which means that I'll be able to install SVGA 3D and to test my DX apps. * end of skip * The questions are: Will WS2008 be as smooth with just 1 gig of RAM, just like Win XP was? Will 10 gigs of HDD be enough, or does the server require more? Will I be able to install .NET ver. 4 on WS2008? Are there any limitations that I need to be aware of as a .NET programmer? EDIT: I was hoping that WS2008 is XP-based, not Vista-based/W7-based. In comparison, a W7 virtual machine with 2 gigs of RAM and 2 proc cores nearly kills my host OS, whereas WinXP runs extremely fast even with 1 core and 1 gig of RAM. That's the main reason why I want to try WS2008.

    Read the article

  • Slow NFS and GFS2 performance

    - by Tiago
    Recently I've designed and configured a 4 node cluster for a webapp that does lots of file handling. The cluster have been broken down into 2 main roles, webserver and storage. Each role is replicated to a second server using drbd in active/passive mode. The webserver does a NFS mount of the data directory of the storage server and the latter also has a webserver running to serve files to browser clients. In the storage servers I've created a GFS2 FS to hold the data which is wired to drbd. I've chose GFS2 mainly because the announced performance and also because the volume size which has to be pretty high. Since we entered production I've been facing two problems that I think are deeply connected. First of all, the NFS mount on the webservers keeps hanging for a minute or so and then resumes normal operations. By analyzing the logs I've found out that NFS stops answering for a while and outputs the following log lines: Oct 15 18:15:42 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:44 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:46 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:51 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:58 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK In this case, the hang lasted for 16 seconds but sometimes it takes 1 or 2 minutes to resume normal operations. My first guess was this was happening due to heavy load of the NFS mount and that by increasing RPCNFSDCOUNT to a higher value, this would become stable. 
I've increased it several times and apparently, after a while, the logs started appearing less times. The value is now on 32. After further investigating the issue, I've came across a different hang, despite the NFS messages still appear in the logs. Sometimes, the GFS2 FS simply hangs which causes both the NFS and the storage webserver to serve files. Both stay hang for a while and then they resume normal operations. This hangs leaves no trace on client side (also leaves no NFS ... not responding messages) and, on the storage side, the log system appears to be empty, even though the rsyslogd is running. The nodes connect themselves through a 10Gbps non-dedicated connection but I don't think this is an issue because the GFS2 hang is confirmed but connecting directly to the active storage server. I've been trying to solve this for a while now and I've tried different NFS configuration options, before I've found out the GFS2 FS is also hanging. The NFS mount is exported as such: /srv/data/ <ip_address>(rw,async,no_root_squash,no_all_squash,fsid=25) And the NFS client mounts with: mount -o "async,hard,intr,wsize=8192,rsize=8192" active.storage.vlan:/srv/data /srv/data After some tests, these were the configurations that yielded more performance to the cluster. I am desperate to find a solution for this as the cluster is already in production mode and I need to fix this so that this hangs won't happen in the future and I don't really know for sure what and how I should be benchmarking. What I can tell is that this is happening due to heavy loads as I have tested the cluster earlier and this problems weren't happening at all. Please tell me if you need me to provide configuration details of the cluster, and which do you want me to post. As last resort I can migrate the files to a different FS but I need some solid pointers on whether this will solve this problems as the volume size is extremely large at this point. The servers are being hosted by a third-party enterprise and I don't have physical access to them. Best regards. EDIT 1: The servers are physical servers and their specs are: Webservers: Intel Bi Xeon E5606 2x4 2.13GHz 24GB DDR3 Intel SSD 320 2 x 120GB Raid 1 Storage: Intel i5 3550 3.3GHz 16GB DDR3 12 x 2TB SATA Initially there was a VRack setup between the servers but we've upgraded one of the storage servers to have more RAM and it wasn't inside the VRack. They connect through a shared 10Gbps connection between them. Please note that it is the same connection that is used for public access. They use a single IP (using IP Failover) to connect between them and to allow for a graceful failover. NFS is therefore over a public connection and not under any private network (it was before the upgrade, were the problem still existed). The firewall was configured and tested thoroughly but I disabled it for a while to see if the problem still occurred, and it did. From my knowledge the hosting provider isn't blocking or limiting the connection between either the servers and the public domain (at least under a given bandwidth consumption threshold that hasn't been reached yet). Hope this helps figuring out the problem. 
EDIT 2: Relevant software versions: CentOS 2.6.32-279.9.1.el6.x86_64 nfs-utils-1.2.3-26.el6.x86_64 nfs-utils-lib-1.1.5-4.el6.x86_64 gfs2-utils-3.0.12.1-32.el6_3.1.x86_64 kmod-drbd84-8.4.2-1.el6_3.elrepo.x86_64 drbd84-utils-8.4.2-1.el6.elrepo.x86_64 DRBD configuration on storage servers: #/etc/drbd.d/storage.res resource storage { protocol C; on <server1 fqdn> { device /dev/drbd0; disk /dev/vg_storage/LV_replicated; address <server1 ip>:7788; meta-disk internal; } on <server2 fqdn> { device /dev/drbd0; disk /dev/vg_storage/LV_replicated; address <server2 ip>:7788; meta-disk internal; } } NFS Configuration in storage servers: #/etc/sysconfig/nfs RPCNFSDCOUNT=32 STATD_PORT=10002 STATD_OUTGOING_PORT=10003 MOUNTD_PORT=10004 RQUOTAD_PORT=10005 LOCKD_UDPPORT=30001 LOCKD_TCPPORT=30001 (can there be any conflict in using the same port for both LOCKD_UDPPORT and LOCKD_TCPPORT?) GFS2 configuration: # gfs2_tool gettune <mountpoint> incore_log_blocks = 1024 log_flush_secs = 60 quota_warn_period = 10 quota_quantum = 60 max_readahead = 262144 complain_secs = 10 statfs_slow = 0 quota_simul_sync = 64 statfs_quantum = 30 quota_scale = 1.0000 (1, 1) new_files_jdata = 0 Storage network environment: eth0 Link encap:Ethernet HWaddr <mac address> inet addr:<ip address> Bcast:<bcast address> Mask:<ip mask> inet6 addr: <ip address> Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:957025127 errors:0 dropped:0 overruns:0 frame:0 TX packets:1473338731 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2630984979622 (2.3 TiB) TX bytes:1648430431523 (1.4 TiB) eth0:0 Link encap:Ethernet HWaddr <mac address> inet addr:<ip failover address> Bcast:<bcast address> Mask:<ip mask> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 The IP addresses are statically assigned with the given network configurations: DEVICE="eth0" BOOTPROTO="static" HWADDR=<mac address> ONBOOT="yes" TYPE="Ethernet" IPADDR=<ip address> NETMASK=<net mask> and DEVICE="eth0:0" BOOTPROTO="static" HWADDR=<mac address> IPADDR=<ip failover> NETMASK=<net mask> ONBOOT="yes" BROADCAST=<bcast address> Hosts file to allow for a graceful NFS failover in conjunction with NFS option fsid=25 set on both storage servers: #/etc/hosts <storage ip failover address> active.storage.vlan <webserver ip failover address> active.service.vlan As you can see, packet errors are down to 0. I've also ran ping for a long time without any packet loss. MTU size is the normal 1500. As there is no VLan by now, this is the MTU used to communicate between servers. The webservers' network environment is similar. One thing I forgot to mention is that the storage servers handle ~200GB of new files each day through the NFS connection, which is a key point for me to think this is some kind of heavy load problem with either NFS or GFS2. If you need further configuration details please tell me. EDIT 3: Earlier today we had a major filesystem crash on the storage server. I couldn't get the details of the crash right away because the server stop responding. After the reboot, I noticed the filesystem was extremely slow, and I was not being able to serve a single file through either NFS or httpd, perhaps due to cache warming or so. Nevertheless, I've been monitoring the server closely and the following error came up in dmesg. The source of the problem is clearly GFS, which is waiting for a lock and ends up starving after a while. INFO: task nfsd:3029 blocked for more than 120 seconds. 
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. nfsd D 0000000000000000 0 3029 2 0x00000080 ffff8803814f79e0 0000000000000046 0000000000000000 ffffffff8109213f ffff880434c5e148 ffff880624508d88 ffff8803814f7960 ffffffffa037253f ffff8803815c1098 ffff8803814f7fd8 000000000000fb88 ffff8803815c1098 Call Trace: [<ffffffff8109213f>] ? wake_up_bit+0x2f/0x40 [<ffffffffa037253f>] ? gfs2_holder_wake+0x1f/0x30 [gfs2] [<ffffffff814ff42e>] __mutex_lock_slowpath+0x13e/0x180 [<ffffffff814ff2cb>] mutex_lock+0x2b/0x50 [<ffffffffa0379f21>] gfs2_log_reserve+0x51/0x190 [gfs2] [<ffffffffa0390da2>] gfs2_trans_begin+0x112/0x1d0 [gfs2] [<ffffffffa0369b05>] ? gfs2_dir_check+0x35/0xe0 [gfs2] [<ffffffffa0377943>] gfs2_createi+0x1a3/0xaa0 [gfs2] [<ffffffff8121aab1>] ? avc_has_perm+0x71/0x90 [<ffffffffa0383d1e>] gfs2_create+0x7e/0x1a0 [gfs2] [<ffffffffa037783f>] ? gfs2_createi+0x9f/0xaa0 [gfs2] [<ffffffff81188cf4>] vfs_create+0xb4/0xe0 [<ffffffffa04217d6>] nfsd_create_v3+0x366/0x4c0 [nfsd] [<ffffffffa0429703>] nfsd3_proc_create+0x123/0x1b0 [nfsd] [<ffffffffa041a43e>] nfsd_dispatch+0xfe/0x240 [nfsd] [<ffffffffa025a5d4>] svc_process_common+0x344/0x640 [sunrpc] [<ffffffff810602a0>] ? default_wake_function+0x0/0x20 [<ffffffffa025ac10>] svc_process+0x110/0x160 [sunrpc] [<ffffffffa041ab62>] nfsd+0xc2/0x160 [nfsd] [<ffffffffa041aaa0>] ? nfsd+0x0/0x160 [nfsd] [<ffffffff81091de6>] kthread+0x96/0xa0 [<ffffffff8100c14a>] child_rip+0xa/0x20 [<ffffffff81091d50>] ? kthread+0x0/0xa0 [<ffffffff8100c140>] ? child_rip+0x0/0x20

    Read the article

  • Visual Studio build fails: unable to copy exe-file from obj\debug to bin\debug

    - by Nailuj
    This is a question that has been asked before, both here on Stack Overflow and in other places, but none of the suggestions I've found so far has helped me, so I just have to try asking a new question. Scenario: I have a simple Windows Forms application (C#, .NET 4.0, Visual Studio 2010). It has a couple of base forms that most other forms inherit from, and it uses Entity Framework (and POCO classes) for database access. Nothing fancy, no multi-threading or anything. Problem: All was fine for a while. Then, all out of the blue, Visual Studio failed to build when I was about to launch the application. I got the warning "Unable to delete file '...bin\Debug\[ProjectName].exe'. Access to the path '...bin\Debug\[ProjectName].exe' is denied." and the error "Unable to copy file 'obj\x86\Debug\[ProjectName].exe' to 'bin\Debug\[ProjectName].exe'. The process cannot access the file 'bin\Debug\[ProjectName].exe' because it is being used by another process." (I get both the warning and the error when running Rebuild, but only the error when running Build - don't think that is relevant?) I understand perfectly well what the warning and error message say: Visual Studio is obviously trying to overwrite the exe file while it holds a lock on it for some reason. However, this doesn't help me find a solution to the problem... The only thing I've found that works is to shut down Visual Studio and start it again. Building and launching then works, until I make a change in some of the forms; then I have the same problem again and have to restart... Quite frustrating! As I mentioned above, this seems to be a known problem, so there are lots of suggested solutions. I'll just list what I've already tried here, so people know what to skip: Creating a new clean solution and just copying the files over from the old solution. Adding the following to the project's pre-build event:

        if exist "$(TargetPath).locked" del "$(TargetPath).locked"
        if not exist "$(TargetPath).locked" if exist "$(TargetPath)" move "$(TargetPath)" "$(TargetPath).locked"

    Adding the following to the project properties (.csproj file):

        <GenerateResourceNeverLockTypeAssemblies>true</GenerateResourceNeverLockTypeAssemblies>

    However, none of them worked for me, so you can probably see why I'm starting to get a bit frustrated. I don't know where else to look, so I hope somebody has something to give me! Is this a bug in VS, and if so is there a patch? Or have I done something wrong, do I have a circular reference or similar, and if so how could I find out? Any suggestions are highly appreciated :)
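    One more workaround that often comes up for this exact symptom (offered here as a hedged suggestion, not something from the original post) is that the lock is frequently held by the Visual Studio hosting process, ProjectName.vshost.exe. Killing it in the pre-build event avoids the copy failure, and using a filter instead of /IM keeps taskkill from failing the build when the process isn't running:

        taskkill /F /FI "IMAGENAME eq $(TargetName).vshost.exe"

    Alternatively, unchecking "Enable the Visual Studio hosting process" under Project Properties > Debug is sometimes enough on its own.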

    Read the article

  • Replacing GTileLayer in Google Maps v3, with ImageMapType, Tile bounding box?

    - by justdev
    I need to update this code: radar_layer.getTileUrl=function(tile,zoom) { var llp = new GPoint(tile.x*256,(tile.y+1)*256); var urp = new GPoint((tile.x+1)*256,tile.y*256); var ll = G_NORMAL_MAP.getProjection().fromPixelToLatLng(llp,zoom); var ur = G_NORMAL_MAP.getProjection().fromPixelToLatLng(urp,zoom); var dt = new Date(); var nowtime = dt.getTime(); var tileurl = "http://demo.remoteservice.com/cgi-bin/serve.cgi?"; tileurl+="bbox="+ll.lng()+","+ll.lat()+","+ur.lng()+","+ur.lat(); tileurl+="&width=256&height=256&reaspect=false&cachetime="+nowtime; return tileurl; }; I got as far as: var DemoLayer = new google.maps.ImageMapType({ getTileUrl: function(coord, zoom) { var llp = new google.maps.Point(coord.x*256,(coord.y+1)*256); var urp = new google.maps.Point((coord.x+1)*256,coord.y*256); var ll = googleMap.getProjection().fromPointToLatLng(llp); var ur = googleMap.getProjection().fromPointToLatLng(urp); var dt = new Date(); var nowtime = dt.getTime(); var tileurl = "http://demo.remoteservice.com/cgi-bin/serve.cgi?"; tileurl+="bbox="+ll.lng()+","+ll.lat()+","+ur.lng()+","+ur.lat(); tileurl+="&width=256&height=256&reaspect=false&cachetime="+nowtime; return tileurl; }, tileSize: new google.maps.Size(256, 256), opacity:1.0, isPng: true }); Specifically, I need help with this section: var llp = new google.maps.Point(coord.x*256,(coord.y+1)*256); var urp = new google.maps.Point((coord.x+1)*256,coord.y*256); var ll = googleMap.getProjection().fromPointToLatLng(llp); var ur = googleMap.getProjection().fromPointToLatLng(urp); The service wants the tile bounding box from what I understand. However, ll and ur do not seem to correct at all. I had it working and displaying the entire map bounding box in each tile, but of course that's not what I need. Any insight here would be greatly appreciated, not having the GTileLayers in V3 is fine if I can work around it, until then I'm frustrated.

    Read the article

  • Universal iPad App rejected because of launch crash that I can't reproduce

    - by Enrique R.
    Hello everyone, I'm very frustrated with this problem. After one week of waiting my universal iPad app has been rejected because "is crashing on launch on iPad running iPhone OS 3.2 and iPhone 3GS running iPhone OS 3.1.3 and Mac OS X 10.6.2." Unfortunately I can't replicate the problem, I've tested in debug and release modes and the app works just fine. I even created an ad-hoc configuration and test it in other devices and everything works fine. I should clarify that this is an update to a current iPhone application and I'm using the same distribution profile as the original iPhone app. Also, I checked everything before building the universal app following this entry: http://iphonedevelopment.blogspot.com/2010/04/converting-iphone-apps-to-universal.html Here are the crash logs that Apple sent me: Incident Identifier: 3E0D4A3B-2896-444D-BCBE-6C0CA1A66A90 CrashReporter Key: 18b5124ea5f657227c5f202a27ed707379b3e2e7 Process: Transfer [982] Path: /var/mobile/Applications/E9062465-7EA6-424C-9C61-D9DBCC7C915A/Transfer.app/Transfer Identifier: Transfer Version: ??? (???) Code Type: ARM (Native) Parent Process: launchd [1] Date/Time: 2010-05-04 15:35:57.399 -0700 OS Version: iPhone OS 3.1.3 (7E18) Report Version: 104 Exception Type: EXC_BAD_INSTRUCTION (SIGILL) Exception Codes: 0x00000001, 0x3eaa2188 Highlighted Thread: 0 Backtrace not available Unknown thread crashed with ARM Thread State: r0: 0x00002f90 r1: 0x00000000 r2: 0x385242d8 r3: 0x0000010d r4: 0x00000000 r5: 0x00000000 r6: 0x00000000 r7: 0x00000000 r8: 0x2ffffba0 r9: 0x2fffef90 r10: 0x00000000 r11: 0x00000000 ip: 0x0000000c sp: 0x2ffffba4 lr: 0x2fe08727 pc: 0x00002f94 cpsr: 0x40000010 Binary Images: 0x1000 - 0x25fff +Transfer armv7 /var/mobile/Applications/E9062465-7EA6-424C-9C61-D9DBCC7C915A/Transfer.app/Transfer 0x2fe00000 - 0x2fe24fff dyld armv7 /usr/lib/dyld .... And the one for the iPad: Incident Identifier: 3B170A28-C8E2-4018-8166-E69432A65070 CrashReporter Key: 4a0194e3f60559127faef2b014df605e4c47b981 Hardware Model: iPad1,1 Process: Transfer [533] Path: /var/mobile/Applications/400EE394-7BEE-45CA-942D-DBDC106360FF/Transfer.app/Transfer Identifier: Transfer Version: ??? (???) Code Type: ARM (Native) Parent Process: launchd [1] Date/Time: 2010-05-04 15:37:17.505 -0700 OS Version: iPhone OS 3.2 (7B367) Report Version: 104 Exception Type: 00000020 Exception Codes: 0x8badf00d Highlighted Thread: 0 Application Specific Information: com.erclab.iphone.photodownload failed to launch in time elapsed total CPU time (seconds): 1.150 (user 0.560, system 0.590), 6% CPU elapsed application CPU time (seconds): 0.150, 1% CPU Thread 0: 0 libobjc.A.dylib 0x33561996 0x33560000 + 6550 1 libobjc.A.dylib 0x33564986 0x33560000 + 18822 2 libobjc.A.dylib 0x33564cb2 0x33560000 + 19634 ... The app does not do anything other than loading a local HTML into a web view after the app it's launched so I don't understand why it says "failed to launch in time" Any help will be very much appreciated.

    Read the article

  • Cannot complete a JSONP call from jQuery to WCF

    - by Dusda
    Okay, I am trying (poorly) to successfully make a JSONP call from jQuery on a test page to a WCF web service running locally, as a cross-domain call. I have, at one point or another, either gotten a 1012 URI denied error, gotten a response but in Xml, or just had no response at all. Currently, the way I have it configured it spits back a 1012. I did not write this web service, so it is entirely possible that I am just missing a configuration setting somewhere, but I've become so frustrated with it that I think just asking on here will be more productive. Thanks guys. Details below. I have a WCF web service with the following method: [ScriptMethod(ResponseFormat = ResponseFormat.Json)] public decimal GetOrderStatusJson(int jobId) I am trying to call this method from a jQuery test page, via a cross-domain JSONP call. <script type="text/javascript"> getJsonAjaxObject( "http://localhost:3960/ProcessRequests.svc/json/GetOrderStatusJson", { "jobId": 232 }); function getJsonAjaxObject(webServiceUrl, jsonData) { var request = { type: "POST", contentType: "application/json; charset=utf-8", url: webServiceUrl, data: jsonData, dataType: "jsonp", success: function(msg) { //success! alert("blah"); }, error: function() { //oh nos alert("bad blah"); } }; $.ajax(request); } </script> Below are the chunks of the web.config I configure for this purpose: <services> <service behaviorConfiguration="MWProcessRequestWCF.ProcessRequestsBehavior" name="MWProcessRequestWCF.ProcessRequests"> <endpoint address="json" behaviorConfiguration="AspNetAjaxBehavior" binding="webHttpBinding" contract="MWProcessRequestWCF.IProcessRequests" /> <endpoint address="" binding="wsHttpBinding" contract="MWProcessRequestWCF.IProcessRequests"> <identity> <dns value="localhost" /> </identity> </endpoint> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> </service> </services> <behaviors> <serviceBehaviors> <behavior name="MWProcessRequestWCF.ProcessRequestsBehavior"> <serviceMetadata httpGetEnabled="true"/> <serviceDebug includeExceptionDetailInFaults="false"/> </behavior> </serviceBehaviors> <endpointBehaviors> <behavior name="AspNetAjaxBehavior"> <enableWebScript/> </behavior> </endpointBehaviors> </behaviors>

    Read the article

  • SimpleXML SOAP response Namespace issues

    - by Stu
    Hi. After spending SEVERAL frustrated hours on this I am asking for your help. I am trying to get the content of particular nodes from a SOAP response. The response is $XmlStr = <?xml version="1.0" encoding="UTF-8"?><env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope"<xmlns:ns1="http://soap.xxxxxx.co.uk/"><env:Body><ns1:PlaceOrderResponse><xxxxxOrderNumber></xxxxxOrderNumber><ErrorArray><Error><ErrorCode>24</ErrorCode><ErrorText>The+client+order+number+3002254+is+already+in+use</ErrorText></Error><Error><ErrorCode>1</ErrorCode><ErrorText>Aborting</ErrorText></Error></ErrorArray></ns1:PlaceOrderResponse></env:Body></env:Envelope> I am trying to get at the nodes and children of <ErrorArray. Because of the XML containing namespaces $XmlArray = new SimpleXMLElement($XmlStr); foreach ($XmlArray->env:Envelope->env:Body->ns1:PlaceOrderResponse->ErrorArray->Error as $Error) { echo $Error->ErrorCode."<br />";<br /> } doesn't work. I have read a number of articles such as http://www.sitepoint.com/blogs/2005/10/20/simplexml-and-namespaces/ htp://blog.stuartherbert.com/php/2007/01/07/using-simplexml-to-parse-rss-feeds/ and about 20 questions on this site which unfortunately are not helping. (the second link has htp:// as a newbie here I cannot post more than one link) Even writing, $XmlArray = new SimpleXMLElement($XmlStr); echo "<br /><br /><pre>\n"; print_r($XmlArray); echo "<pre><br /><br />\n"; gives SimpleXMLElement Object ( ) which makes me wonder if the soap response ($XmlStr) is actually a valid input for SimpleXMLElement. It seems that the line $XmlArray = new SimpleXMLElement($XmlStr); is not doing what I expect it to. Any help on how to get the nodes from the XML above would be very welcome. Obviously getting it to work (having a working example) is what I need in the short term, but if someone could help me understand what I am doing wrong would be better in the long term. Cheers. Stu

    Read the article

  • merging UIImagePickerController image with cameraOverlayView

    - by GameDev
    Im really in the need for some help and advice. Spent the last week on this and have now just become frustrated as i cant get it to work! Basically, im trying to merge two images into one image to display/save. First the user picks an image from album, it goes to edit image screen where user can move and scale the image. On this screen is an overlay image (320x480) for the person to align there eyes in. Once aligned I want to save this image (edited and overlay) into one and pass the image onto my next screen. It works fine when the image is filling the edit/crop box, but when the image is widescreen with top and bottom not filling the box, then when i save the image the coords of the overlay dont get saved correctly! Heres my code, ive tried various ways of doing this but have failed at every attempt :( - (void) imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info { // Access the cropped image from info dictionary UIImage *image = [info objectForKey:@"UIImagePickerControllerEditedImage"]; // Combine image with overlay before saving!! image = [self addOverlayToImage:image]; overlayGraphicView.image = nil; // Take the picture image to the post picture view controller postPictureView = [[PostPictureViewController alloc] init:image Company:companyName withLink:buyButtonLink]; [picker pushViewController:postPictureView animated:YES]; [picker release],picker = nil; } The problem is that the image picked (originalImage) could be of any height, my overlayImage is however always 320x480, its almost all transparent with just two eye images in center which i want to save over the original images eyes! - (UIImage*) addOverlayToImage:(UIImage*)originalImage { CGRect cgRect =[[UIScreen mainScreen] bounds]; CGSize size = cgRect.size; UIGraphicsBeginImageContext(size); [originalImage drawInRect:CGRectMake(0, 0, size.width, size.height)]; UIImage* overlayImage = [UIImage imageNamed:overlayGraphicName]; [(UIImage *)overlayImage drawInRect:CGRectMake(0, 0, size.width, size.height)]; UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext(); [finalImage retain]; UIGraphicsEndImageContext(); return finalImage; } I wish there was just an easy way to take a screenshot of whatever is in the edit crop box :( Please if someone could help me with this ASAP as I need to finish this in 1-2 days time! Thank you. EDIT:- I should also mention that with this I get the correct center of the screen and placement of the overlay on my next screen: [(UIImage *)overlayImage drawInRect:CGRectMake(0, 0, size.width, size.height)]; However, I am unable to work out the correct position of the main image especially as the height is different for every image if not fullscreen! I tried this to center it into the correct position but it doesnt work: [originalImage drawInRect:CGRectMake(0,(size.height/2 - originalImage.size.height/2), originalImage.size.width, originalImage.size.height)];

    Read the article
