Search Results

Search found 14531 results on 582 pages for 'financial services'.

Page 90/582 | < Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >

  • Openfire on EC2 with Jingle

    - by Bjorn Roche
    I would like to run Openfire (or another XMPP server) on EC2. At the moment this is just for testing, so easy setup and configuration are important, as is low cost. At some point, however, if things go well, it will be important to scale this. Ideally it would be nice not to have to switch software when the scaling happens, but if a switch needs to happen later, it certainly can. My requirements are:
    - Basic XMPP services, including MUC and pubsub.
    - Logins controlled from an external API. Preferably, when a user attempts to connect, the XMPP server checks with the API to see if their username and password are correct, but I can also have the API keep the XMPP server up to date on new users, deleted users, password changes and so on. I see Openfire has a "user service" API (a sketch of its HTTP interface follows below). Not ideal, but it looks workable.
    - Jingle, including relay and STUN. It's not at all clear to me whether the Jingle Nodes plugin takes care of this. I'm a bit confused about what's required to set this up, and I'd rather know in advance than be confused along the way :) e.g. it seems that STUN servers require more than one IP address.
    Can Openfire do all this for me, including STUN and media relay, on a single machine? Is this hard to configure on EC2 with Openfire? What are the basic steps? Would this be easier with something else, say Tigase? What about the database - should I use Amazon's database service, or run a DB on the same machine? Would the server be compatible with a service like http://www.siteuptime.com/? Thanks!
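
    A note on that "user service" API: Openfire's User Service plugin has historically exposed a plain HTTP interface for provisioning accounts, which is one way the external API could keep the server in sync. A minimal sketch, assuming the plugin is enabled and its shared secret is configured - the hostname, secret and credentials here are placeholders, and the exact parameters should be verified against the plugin's documentation:

      # Push a new account to Openfire through the User Service plugin's
      # HTTP endpoint (assumed to listen on the admin console port, 9090).
      import requests

      def add_openfire_user(username, password):
          resp = requests.get(
              "http://xmpp.example.com:9090/plugins/userService/userservice",
              params={"type": "add", "secret": "SHARED_SECRET",
                      "username": username, "password": password},
              timeout=10)
          resp.raise_for_status()
          return resp.text  # the plugin replies with a short XML result

      add_openfire_user("alice", "s3cret")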

    Read the article

  • Connecting to RDS database from EC2 instance using bind9 CNAME alias

    - by mptre
    I'm trying to get internal DNS up and running on an EC2 instance. The main goal is to be able to define CNAME aliases for other AWS services. For example: instead of using the RDS endpoint, which might change over time, an alias mysql.company.int can be used instead. I'm using bind9, and here are my config files:
    /etc/bind/named.conf.local:
      zone "company.int" {
          type master;
          file "/etc/bind/db.company.int";
      };
    /etc/bind/db.company.int:
      $TTL 3600
      @       IN      SOA     company.int. company.localhost. (
                              20120617   ; Serial
                              604800     ; Refresh
                              86400      ; Retry
                              2419200    ; Expire
                              604800 )   ; Negative Cache TTL
      @       IN      NS      company.int.
      @       IN      A       127.0.0.1
      @       IN      AAAA    ::1
      ; CNAME
      mysql   IN      CNAME   xxxx.eu-west-1.rds.amazonaws.com.
    The dig command assures me my alias is working as expected:
      $ dig mysql.company.int
      ...
      ;; ANSWER SECTION:
      mysql.company.int. 3600 IN CNAME xxxx.eu-west-1.rds.amazonaws.com.
      xxxx.eu-west-1.rds.amazonaws.com. 60 IN CNAME ec2-yyy-yy-yy-yyy.eu-west-1.compute.amazonaws.com.
      ec2-yyy-yy-yy-yyy.eu-west-1.compute.amazonaws.com. 589575 IN A zzz.zz.zz.zzz
      ...
    As far as I can understand, a reverse zone isn't needed for a simple CNAME alias. However, when I try to connect to MySQL using my newly created alias, the operation gives me a timeout:
      $ mysql -uuser -ppassword -hmysql.company.int
      ERROR 2003 (HY000): Can't connect to MySQL server on 'mysql.company.int' (110)
    Any ideas? Thanks in advance!
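
    Since dig resolves the alias correctly, the timeout points at connectivity rather than DNS - most often the RDS security group not allowing inbound 3306 from the instance's address or security group. A minimal sketch to separate the two failure modes (the hostname is the placeholder alias from above):

      # Distinguish DNS failure from TCP unreachability, e.g. an RDS
      # security group blocking port 3306.
      import socket

      host, port = "mysql.company.int", 3306
      addr = socket.gethostbyname(host)  # raises socket.gaierror if DNS is broken
      print("resolved to", addr)
      try:
          socket.create_connection((addr, port), timeout=5).close()
          print("TCP reachable - investigate MySQL credentials/grants instead")
      except OSError as exc:
          print("TCP blocked - check the RDS security group:", exc)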

    Read the article

  • Is there a way to determine which service (in svchost.exe) makes an outgoing connection?

    - by fluxtendu
    I'm redoing my firewall configuration with more restrictive policies, and I would like to determine the provenance (and/or destination) of some outgoing connections. I have an issue because they come from svchost.exe and go to web content/application delivery providers, or similar:
    - 5 IPs in range 82.96.58.0 - 82.96.58.255 --> Akamai Technologies (akamaitechnologies.com)
    - 3 IPs in range 93.150.110.0 - 93.158.111.255 --> Akamai Technologies (akamaitechnologies.com)
    - 2 IPs in range 87.248.194.0 - 87.248.223.255 --> LLNW Europe 2 (llnw.net)
    - 205.234.175.175 --> CacheNetworks, Inc. (cachefly.net)
    - 188.121.36.239 --> Go Daddy Netherlands B.V. (secureserver.net)
    So is it possible to know which service makes a particular connection? Or what's your recommendation about the rules applied to these? (Comodo Firewall & Windows 7)
    Update: netstat -ano and tasklist /svc help me a little, but there are many services in one svchost.exe, so it's still an issue. Moreover, the service names returned by "tasklist /svc" are not easily readable. (All the connections are HTTP (port 80), but I don't think that's relevant.)
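
    One way to narrow this down, assuming Python and the psutil package can be installed on the machine: map each live connection to its owning PID, then feed that PID to tasklist. For a stubbornly shared svchost, a single service can also be moved into its own process with "sc config <ServiceName> type= own" (then restart the service), so its traffic stands out on its own PID:

      # Minimal sketch (run as admin; requires the psutil package): list each
      # established outbound TCP connection with its owning PID and image
      # name, then cross-reference with: tasklist /svc /fi "PID eq <pid>"
      import psutil

      for conn in psutil.net_connections(kind="tcp"):
          if conn.raddr and conn.status == psutil.CONN_ESTABLISHED:
              name = psutil.Process(conn.pid).name() if conn.pid else "?"
              print(conn.pid, name, "->", "%s:%s" % conn.raddr)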

    Read the article

  • Recommendations for colocation in the US

    - by Emil
    Hello Server Fault. I work for a European media company and we are currently looking for colocation in the US. I know the European market quite well; unfortunately, that is not the case for the US. I'm hoping you guys can help me out a bit with a few questions - it would be much appreciated!
    I am looking for a data center that can deliver a high level of availability (Tier 3 or better). The installation will be fairly large, so capacity is important, as is good internet connectivity/carrier presence. Most important, however, is good customer support: skilled, dedicated and responsive technical staff, since we won't have tech staff close by. I'm looking for a small and fast-moving company that targets internet businesses rather than big old enterprise hosting.
    - What locations should we go for, given that we want to reach all of the US from a single site and still maintain decent latency? (Do we need east and west coast?)
    - Where are the main internet hubs, and should you try to get as close to them as possible?
    - Are there any good online resources I should look at?
    - Where do the large-scale internet/media services colocate?
    Lastly, I would be very happy to get some actual recommendations for companies to talk to.
    P.S. I'm happy to return the favor if anyone has questions regarding data centers and colocation in Europe.

    Read the article

  • Migrating from Exchange 2003 to 2010: UID changes from 32 characters to 64 characters

    - by Seth
    We have built a custom CRM tool that integrates with Exchange 2010 using Exchange Web Services (EWS). The issue we are encountering revolves around editing appointments through the CRM tool that were created in Exchange 2003. We migrated the sales staff from Exchange 2003 to 2010 so that we could use EWS. EWS works great, except for appointments created prior to the migration: those cannot be modified using EWS. The reason is that the ExchangeItemUID for the appointment changed from 32 characters to 64 characters, and EWS does not recognize ExchangeItemUIDs that are 32 characters. We are looking for a solution that will allow us to modify these appointments. We are open to running a script that updates all appointment events for the sales people so that 2003 appointments are converted to the 2010 format, and we are also open to alternate IDs as opposed to using the UID. I have seen some references to using CleanGlobalObjectID, but I don't see that property in EWS. Has anyone encountered this problem before? Any help you could give would be greatly appreciated!
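
    On the CleanGlobalObjectID point: EWS does not expose it as a first-class property, but it can be requested as an extended MAPI property. A sketch of the field definition as it would appear in an EWS SOAP request - PropertyId 35 is 0x23, PidLidCleanGlobalObjectId, in the Meeting distinguished property set; verify this against your Exchange version before relying on it:

      <t:ExtendedFieldURI DistinguishedPropertySetId="Meeting"
                          PropertyId="35"
                          PropertyType="Binary" />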

    Read the article

  • Problem with network after malware attack

    - by Cruelio
    I'm trying to help some friends with a Win XP machine. I got rid of the malware using Malwarebytes and HijackThis, but now they (I) have another problem. When the computer boots into Windows it seems fine. When I start Internet Explorer, the browser window opens just fine, but nothing happens for a minute or two. After the two minutes of waiting, the network icon appears in the taskbar next to the clock, and then everything works. The computer is connected to the internet using an Ethernet adapter. I have looked at the Event Log and found an error from PerfNet with event ID 2004:
      <Provider Name="PerfNet" />
      <EventID Qualifiers="49152">2004</EventID>
      <Level>2</Level>
      <Task>0</Task>
      <Keywords>0x80000000000000</Keywords>
    What I have tried so far:
    - In the Device Manager I have uninstalled the Ethernet adapter and installed it again.
    - I have uninstalled and reinstalled the Windows File and Printer Sharing service.
    - I have verified that both the Server and Workstation services are started.
    What should I do next?
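
    One avenue worth trying, offered as a suggestion rather than a confirmed fix for this particular machine: malware (and malware removal) often leaves the Winsock catalog or TCP/IP stack corrupted, which shows up as exactly this kind of delayed network start. Both can be reset with stock Windows XP (SP2 or later) commands, followed by a reboot:

      netsh winsock reset
      netsh int ip reset reset.log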

    Read the article

  • Windows: How to make programs think they're not running in a terminal server session?

    - by sinni800
    I am using the program "SoftXPand 2011 Duo" by Miniframe on my Windows 7 PC. It makes two workstations out of one computer, using the terminal services built into Windows to create the additional session. I use two screens, two keyboards and two mice to create this "illusion" of two computers. It works quite well, and I can even play two different 3D games on the two screens attached to this single machine (using a Radeon HD5770 and a Core i5 2500K with 8 GB of RAM).
    There are a few downsides to this, and I just found one that is hidden at first glance: the sessions you are in (even on the first workstation) identify themselves as terminal server sessions! Some programs will therefore run with limited (graphical) effects, and some won't run at all. This also resulted in some games not running at all - they just say "Cannot be run in a terminal server session" and exit. I have already proven that top modern games (DirectX 10, 11) run just as well as on the same machine without SoftXPand, so this is a pretty artificial limitation!
    So, can I somehow hack my current session so it doesn't look like a terminal server session anymore? I.e., can I make
      #include <windows.h>
      #pragma comment(lib, "user32.lib")

      BOOL IsRemoteSession(void)
      {
          return GetSystemMetrics( SM_REMOTESESSION );
      }
    return FALSE? (Not a programming question! Just an example of how programs detect whether they're in a terminal server session!)
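
    For what it's worth, you can see exactly what such programs observe in a given session without compiling anything; SM_REMOTESESSION is the constant 0x1000 from winuser.h, and a nonzero result means the session reports itself as remote. A quick check, sketched with Python's ctypes:

      # Print what GetSystemMetrics(SM_REMOTESESSION) reports for the
      # current session - nonzero means "terminal server session".
      import ctypes

      SM_REMOTESESSION = 0x1000  # from winuser.h
      print(ctypes.windll.user32.GetSystemMetrics(SM_REMOTESESSION))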

    Read the article

  • Set service dependency on internet connection

    - by nccsbim071
    Hi, I have created a Windows service and set some dependencies, like on MSMQ and MSSQLSERVER, and everything is working nicely. But I need to set another dependency for my service: a dependency on the internet connection. My service is responsible for sending emails. As soon as my server starts, my service starts too and checks whether there is anything to send; if there is, it starts to send email. If it cannot connect to the internet during sending, it cannot send the email. So I guess I should make my service depend on the internet connection too.
    I already set my Windows service's dependencies on Microsoft SQL Server and Microsoft Message Queuing by editing the registry: adding a new multi-string value named "DependOnService", of type REG_MULTI_SZ, with the names of the services my service depends upon (one per line) as the data. For Microsoft SQL Server I set the value to "MSSQLSERVER", but I don't know the name of the internet service that I need to set the dependency upon. How can I do this? Any help please. Thanks
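
    For what it's worth, there is no service that represents the internet connection: DependOnService can only name other local services (Tcpip, Dnscache, LanmanWorkstation and the like), which guarantees a working network stack but not internet reachability. The robust approach is retry logic inside the service itself. A minimal sketch, written in Python for brevity (the SMTP host is a placeholder); the same loop translates directly into the service's own language:

      # Poll outbound connectivity to the mail relay and only then start
      # the send loop, instead of depending on a nonexistent service.
      import socket, time

      def wait_for_network(host="smtp.example.com", port=25,
                           retries=30, delay=10):
          for _ in range(retries):
              try:
                  socket.create_connection((host, port), timeout=5).close()
                  return True   # reachable - safe to start sending
              except OSError:
                  time.sleep(delay)
          return False          # give up; retry on the next service cycle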

    Read the article

  • Remote connection to a Windows Server 2008 Web Edition

    - by Lorenzo
    Hello, I have just installed Windows Server 2008 Web Edition to have a development/test site in my office. In the test network I only have 2 machines:
    - Windows Server 2008 Web Edition
    - Vista x64 client machine with Visual Studio
    The client and the server are networked using a NETGEAR router. I have enabled Remote Desktop on the server, but when I try to connect to it from the Vista client, I get the credential window and, even when I enter the correct credentials, I am not able to log in remotely to the server. What am I doing wrong?
    Update 1: I have even tried to create a folder share on the server, but I cannot access it for the same reason - it says the user or password is invalid. That seems impossible, as I am logging in locally on the server with the same credentials.
    Update 2: If I try to browse the network from the RDP client, I receive a message saying that there are no servers running Terminal Services in my network... :O

    Read the article

  • Software to monitor bill payment to mission-critical IT service providers (ISP, DNS, etc.)

    - by Sholom
    Hi all,
    The problem: our very likable but absent-minded bookkeeper keeps neglecting to pay our IT vendors on time. Just this past week our internet service was disconnected. The same could happen to many other mission-critical accounts (domain registrar, backup MX, anti-virus license, HackerSafe (McAfee Secure) service, and even an 800 number, to name a few). As the sysadmin, I monitor my servers to make sure they are plugged into the power outlet. I believe I should also monitor my services to make sure they are plugged into their money outlet. To compound the problem, when the power goes out someone else will likely notice and notify me, but if a bill is not paid, no one will notice until service is lost - lost as in losing our domain name, which would cause a lot more damage than the power failing on our server.
    [Solution] = [Doesn't work because]:
    - Retrain the bookkeeper = wishful thinking.
    - Notify my manager = already have (via email). Protects me, does not solve the problem.
    - Fire the bookkeeper = what makes you so sure the next one will never forget?
    Bottom line: humans are humans, and sooner or later something critical will be royally messed up. We need to partner with a machine to help us out here. Anybody have the same problem? What software/solution do you use? I would like software that emails me when a bill is past due, just like I get an email when the power outlet fails. Anyone hear of anything like that? Thanks
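
    In the absence of an off-the-shelf product, the emailed-alert behavior described above is small enough to sketch. A cron-driven script along these lines (every name, address and date below is hypothetical) covers the "nag when past due" part, with the due-date register maintained alongside the vendor accounts:

      # Email an alert whenever a vendor bill is past its due date.
      import smtplib
      from datetime import date
      from email.message import EmailMessage

      # Hypothetical due-date register; a real deployment would read this
      # from the bookkeeping system or a shared spreadsheet.
      BILLS = {"ISP": date(2012, 7, 1), "Domain registrar": date(2012, 8, 15)}

      def overdue(today=None):
          today = today or date.today()
          return [name for name, due in BILLS.items() if due <= today]

      def alert(names):
          msg = EmailMessage()
          msg["Subject"] = "Overdue IT vendor bills: " + ", ".join(names)
          msg["From"], msg["To"] = "billing-bot@example.com", "sysadmin@example.com"
          msg.set_content("These vendor accounts are past due: " + ", ".join(names))
          smtplib.SMTP("localhost").send_message(msg)

      if __name__ == "__main__":
          names = overdue()
          if names:
              alert(names)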

    Read the article

  • AWS own email domain and some generic questions

    - by John Brunner
    I'm getting started with Amazon Web Services and I have a few questions I'm not sure about.
    Like every (company) webpage, I want to use an "[email protected]" email address, but how is that done? I looked at godaddy.com (for domain registration); they offer me an email address like I want, but for 3 dollars per month. Is this possible with AWS? Because with AWS you just have a complex domain name, which is not very user-friendly or professional-looking.
    I also want to host my dynamic webpage in the Amazon cloud, but I'm not sure I'm doing that right. I've read many guides, and all I know is that I have to purchase Elastic Compute Cloud (EC2) and Simple Storage Service (S3)... and every guide works with the basic Linux package. Why not Windows? Is it more expensive? I just want to host a MySQL server for the dynamic webpage, which is reached over a normal domain.
    And one last question: when I sign up for an AWS account, it asks me for an email account, but I find it a little unprofessional to enter my free-webmail address there... How is it normally done?
    Thanks in advance! Best regards, john.

    Read the article

  • Map a URL bought with Dreamhost to Amazon EC2 (AWS)

    - by Edan Maor
    I have several domains I purchased through Dreamhost. I'm starting to use Amazon's AWS, and I'd like to map the domains to Amazon. This is something of a silly question, and I've already done the same thing several times for other services (mapping from Dreamhost to WebFaction). But when I tried to find the proper way to do the same mapping to Amazon, I found a lot of detailed writing about whether I should be using CNAME or A records, etc. So I wanted to ask in the simplest possible terms and hopefully get a simple, concrete answer: I bought a domain from Dreamhost, and I have an EC2 server running on AWS (to which I have already mapped an Elastic IP address). How do I make the domain map to AWS? And if there are several options, which one should I effectively be using?
    P.S. Meta-question - why are things so much more difficult with AWS? When I search Google for "move from Dreamhost to WebFaction", I get very simple answers on how to do the mapping. In what way is AWS different?
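
    Since an Elastic IP is a stable address, the usual answer is an A record at the domain apex plus a CNAME for www. A minimal sketch of the records as they would be entered in Dreamhost's DNS panel, with 203.0.113.10 standing in for the actual Elastic IP:

      example.com.      A      203.0.113.10
      www.example.com.  CNAME  example.com.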

    Read the article

  • Suddenly getting lock timeouts with MySQL

    - by Marc Hughes
    We've got a web app hosted on Amazon Web Services. Our database is a Multi-AZ RDS MySQL server running 5.1.57, and 3-4 app servers talk to it. Today we started seeing a lot of errors along the lines of "Lock wait timeout exceeded; try restarting transaction" - almost 1% of POST requests are seeing this.
    There have been no modifications to the code running on the site. There have been no schema changes. We haven't had a big spike in traffic. I've been looking at the processes running, and none seem out of control. I tried scaling our RDS instance from a small to a large, with no effect.
    Two days ago, Amazon had some outages. As part of the recovery from that, our RDS server and our app servers ended up in different availability zones, but all within the same region. But yesterday everything was fine, so I'm not convinced that's related. The lock timeouts are in different types of requests and occur in different InnoDB tables. I have noticed that the number of open connections jumped when we started seeing problems, but that may be a symptom and not a cause. What are my next steps in debugging this?
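
    A reasonable next step is to catch the blocking transactions in the act. On MySQL 5.1 the core tool is the InnoDB status dump; the information_schema transaction and lock tables shown below exist only when the server runs the InnoDB plugin, so treat that part as conditional:

      SHOW ENGINE INNODB STATUS\G  -- TRANSACTIONS section lists current lock waits
      SHOW FULL PROCESSLIST;       -- long-running statements that may hold locks
      -- Only with the InnoDB plugin:
      SELECT r.trx_id AS waiting_trx, r.trx_query AS waiting_query,
             b.trx_id AS blocking_trx, b.trx_query AS blocking_query
        FROM information_schema.INNODB_LOCK_WAITS w
        JOIN information_schema.INNODB_TRX r ON r.trx_id = w.requesting_trx_id
        JOIN information_schema.INNODB_TRX b ON b.trx_id = w.blocking_trx_id;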

    Read the article

  • Migrating to AWS Cloud with auto-scaling - where to put Redis and ElasticSearch?

    - by RobMasters
    I've been trying to research this topic but haven't found anywhere that recommends where to install services such as Redis and ElasticSearch when migrating to a cloud framework. I'm currently running a Symfony2 application on 2 static servers - one is running MySQL, and the other is the public-facing web server, which also has Redis and ElasticSearch running on it. Both of these servers are virtualised, but they're static in the sense of not being able to replicate at present (various aspects are still dependent on the local filesystem).
    The goal is to migrate to AWS and use auto-scaling to be able to spin up and kill web servers as required, but I'm not clear on what I should put on each EC2 instance. Should they be single-responsibility only? i.e., set up individual instances for the web server(s), Redis, and ElasticSearch, most likely an RDS instance for MySQL, and only set up auto-scaling on the web server(s)?
    I don't foresee having to scale the ElasticSearch server anytime soon, as it's only driving the search functionality, but it's possible that Redis may need to be replicated at some point - should this be done manually? I'm not sure how it could be done automatically, as each instance needs to be configured to know about its master/slave(s) as far as I know.
    I'd appreciate advice on this. One more quick question while I'm here - how would I be able to deploy code changes when there are X web servers currently active? I'm using a Capifony deployment script (the Symfony2 version of Capistrano), which I think can handle multiple servers easily enough by specifying an array of domain addresses... but how should this be handled when the number of web servers can vary? (See the sketch below for one approach.)
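
    On the deployment question: a common pattern is to resolve the current fleet from the EC2 API at deploy time and feed the hostnames to the deploy tool, rather than hard-coding an array. A minimal sketch, written with the present-day boto3 library purely for illustration - the Role tag is an assumed convention for marking web servers:

      # Discover running web servers by tag so the deploy script can target
      # however many instances the auto-scaler currently has up.
      import boto3

      ec2 = boto3.client("ec2", region_name="eu-west-1")
      resp = ec2.describe_instances(Filters=[
          {"Name": "tag:Role", "Values": ["web"]},
          {"Name": "instance-state-name", "Values": ["running"]},
      ])
      hosts = [inst["PublicDnsName"]
               for res in resp["Reservations"]
               for inst in res["Instances"]]
      print("\n".join(hosts))  # feed this list into the Capifony :domain setting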

    Read the article

  • Performance data collection for short-running, ephemeral servers

    - by ErikA
    We're building a medical image processing software stack, currently hosted on various AWS resources. As part of this application, we have a handful of long-running servers (database, load balancers, web application, etc.). Collecting performance data on those servers is quite simple - my go-to recipe of Nagios (for monitoring/notifications) and Munin (for collecting performance data and displaying trends) will work just fine.
    However, as part of this application, we are constantly starting up and terminating compute instances on EC2. In typical usage, these compute instances start up, configure themselves, receive a job from a message queue, and then get to work processing that job, which takes anywhere from 15 minutes to over 8 hours. After job completion, these instances get terminated, never to be heard from again.
    What is a decent strategy for collecting performance data on these short-lived instances? I don't necessarily need monitoring on them - if they fail for whatever reason, our application will detect this and handle re-starting the job on another instance or raising a flag so an administrator can take a look. However, it would still be useful to collect information like CPU (user, idle, iowait, etc.), memory usage, network traffic, disk read/write data, and so on. In our internal database, we track the instance ID of the machine that runs each job, and it would be quite helpful to be able to look up performance data for a specific instance ID for troubleshooting and profiling.
    Munin doesn't seem like a great candidate, as it requires maintaining a list of Munin nodes in a text file - far from ideal for an environment with a high amount of churn - and for the short time each node will be running, I'd rather keep the full-resolution data indefinitely than have RRD water down the data over time. In the end, my guess is that this will require a monitoring engine that:
    - uses a database (MySQL, SQLite, etc.) for configuration and data storage
    - exposes an API for adding/removing hosts and services
    Are there other things I should be thinking about when evaluating options? Perhaps I'm over-thinking this, though, and just ought to run sar at 1-minute intervals on these short-lived instances and collect the sar db files prior to termination. (A sketch of that collection step follows.)
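
    For the run-sar-and-collect idea at the end, the collection step is small enough to sketch. This assumes sysstat's default /var/log/sa layout, reads the instance ID from the EC2 metadata service, and uploads with the present-day boto3 library for illustration; the bucket name is a placeholder:

      # On instance shutdown, ship the day's sar data file to S3 keyed by
      # instance ID, so it can be looked up per job later.
      import datetime, boto3
      from urllib.request import urlopen

      METADATA = "http://169.254.169.254/latest/meta-data/instance-id"
      instance_id = urlopen(METADATA, timeout=2).read().decode()

      sa_file = "/var/log/sa/sa%02d" % datetime.date.today().day
      boto3.client("s3").upload_file(
          sa_file, "perf-archive-bucket",  # hypothetical bucket name
          "sar/%s/%s" % (instance_id, sa_file.rsplit("/", 1)[-1]))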

    Read the article

  • What is the reason behind a Windows service stopping - is it due to LAN problems or other issues?

    - by Steve
    I have a Windows service named Trunk which stopped one day; I just want to know the reason behind it. These are the entries in the logs:
      Nov 15 17:54:04.318 :Trunk-1516:Trunk:handle_control_event:Received CTRL_LOGOFF_EVENT, ignore it
      Nov 25 15:54:52.157 :Trunk-1516:Trunk:ERROR - Process Restart Count (5) Exceeded for:C:\Program Files\secon\11.1.4\bin\vmd
      Nov 25 15:54:52.157 :Trunk-1516:Trunk:Stopping Trunk ...
      Nov 25 15:54:52.314 :Trunk-1516:Trunk:Shutting down, signaled C:\Program F
      Nov 20 15:54:20.345 :SCBridge.RegisterBridge:Exception in method: ScUtility.ScCommandException (0xa08990002): Exception from HRESULT: 0xa08990002
        Supplemental Information: None available.
        at ScServer.ScServiceProcessorRegistryManager.Attach(String serviceProcessor, ScClientInformation clientInfo, FORCE_ATTACH_SPEC forceAttachToMaster)
        at ScServer.ScServiceProcessorRegistry.Attach(String serviceProcessor, Object clientInfo)
        at ScServer.ScServiceProcessorRegistry.Attach(String serviceProcessor)
        at ServerControlInterface.SCBridge.RegisterBridge(String SPName)
      for system APOLLOSP0 attempting to attach and register with the Bridge
    The service is registered with a specific account, so I first thought the reason behind it might be that user logging off from the machine, or a LAN disconnection problem. But having taken another look at the entries above, we seem to have a constant failure being generated in vmd, which causes Trunk to detect that vmd requires a restart. Most of the time it works OK and the restart count is anything up to 4. In this case the Trunk log confirms that the Restart Count is 5 and so is considered to be exceeded. Presumably this triggers the termination of the other services, and Trunk is actually doing its job. So, could this just be a timing issue, meaning we need to increase the tolerance level (i.e. the restart count), or do we need to address the 0xa08990002 error in vmd?

    Read the article

  • Windows service fails to start with local user until password is entered again in logon tab

    - by Nick
    Basically, we have a service that uses a local account as its logon. It has all the proper permissions, and everything is working fine; the service starts and runs and all is good. Then one day, after rebooting, the service fails to start, and the logs show an incorrect password. Our technicians resolve the issue by simply retyping the password into the "Log On" tab in services.msc. Unfortunately, we have not been able to find the root cause. I suspect that the password stored for the service is somehow lost. Does anyone know where the password hash might be stored so we can check it? The only activities that seem possibly related are patching with Microsoft security patches, but we have multiple servers running the same service, and we have never seen more than one at a time - and it's usually a different one each time this occurs. I believe this to be the same issue as "Windows service fails to start with custom user until started once with local user", but I was unable to add comments, and it's really old.
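
    For what it's worth, service logon passwords are kept in the LSA secrets store (under HKLM\SECURITY\Policy\Secrets\_SC_<ServiceName>), which is not directly readable, so "checking the hash" is impractical. While the root cause is investigated, the technicians' manual fix can be scripted with the stock sc utility - service name, account and password below are placeholders:

      sc config MyService obj= .\svcaccount password= "TheP@ssw0rd"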

    Read the article

  • Get the Windows Scheduler's Location or Code?

    - by Ram
    Today I got a task to find a scheduled job that runs once a month; I have to change its period to once a week. I have searched for it in the Windows scheduled tasks, but I didn't find it. It sends a mail containing a link. Now I am confused about where else I can find it, as the place I know the scheduler could be has already been checked by me. Can someone suggest where else I can search for this scheduler? As per the previous developer's comment, "It is a normal Windows method of scheduling tasks".
    Update: The task runs monthly, and as far as I know it could be a Windows scheduled task or a Windows service created by the old developer. As the previous developer is not available and I do not have any documentation, I need to change the period from monthly to weekly. I have checked the Task Scheduler on the server and checked the services running on the server, and I was unable to find the scheduler. Now I have two questions:
    - Is there any other approach I can use to schedule an automated email periodically?
    - Any ideas on how to find the existing one?
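
    Since the job might live in the Task Scheduler under a non-obvious name, a first step is to dump every task on the server in verbose form and search the output for the mail-sending command or the link it contains:

      schtasks /query /fo LIST /v > tasks.txt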

    Read the article

  • What is the correct approach I should use for an application that requires Amazon S3 uploads and SimpleDB data management?

    - by Luis Oscar
    I am developing an application for iOS, and that is going smoothly; the problem is that I am very new at server-side things. I am totally confused about how to correctly use Amazon Web Services for this purpose. What I want to do is very simple: I want my application to be able to query a servlet hosted on EC2 in order to retrieve pictures and data, based on some criteria, from S3 and SimpleDB respectively. The application should also be able to upload pictures into an S3 bucket and register the information in SimpleDB.
    My main concerns are security and costs. So far I was using the Amazon Token Vending Machine, but I haven't been successful in customizing it, and while researching I discovered that in the long run it is very expensive. The ultimate goal is to run a "social" picture service for my iOS application: registering new users, authenticating those users, and seeing what permissions they have to which pictures in the bucket - all without having to worry about third parties accessing my users' private pictures.
    Sorry for this question, but I am really clueless about how to handle this... I have tried reading many articles, but all this server stuff looks very scary.
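
    For context on the Token Vending Machine: it is essentially a thin wrapper around AWS STS, which hands out temporary, policy-scoped credentials - something the EC2 servlet can do directly. A minimal server-side sketch, using the present-day boto3 library purely for illustration; the bucket name, prefix and user ID are hypothetical:

      # Vend temporary credentials scoped to one user's own S3 prefix,
      # so the iOS client never holds long-lived AWS keys.
      import json, boto3

      def vend_credentials(user_id):
          policy = {"Statement": [{
              "Effect": "Allow",
              "Action": ["s3:GetObject", "s3:PutObject"],
              "Resource": "arn:aws:s3:::my-picture-bucket/users/%s/*" % user_id,
          }]}
          resp = boto3.client("sts").get_federation_token(
              Name="app-%s" % user_id,
              Policy=json.dumps(policy),
              DurationSeconds=3600)
          return resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken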

    Read the article

  • How to add a new item to a SharePoint list using web services in C#

    - by Frank
    Hi, I'm trying to add a new item to a SharePoint list from a WinForms application in C#, using web services. As my only result, I'm getting the unhelpful exception "Exception of type 'Microsoft.SharePoint.SoapServer.SoapServerException' was thrown."
    I have a web reference named WebSrvRef pointing to http://server/site/subsite/_vti_bin/Lists.asmx, and this code:

      XmlDocument xmlDoc;
      XmlElement elBatch;
      XmlNode ndReturn;
      string[] sValues;
      string sListGUID;
      string sViewGUID;

      if (lstResults.Items.Count < 1)
      {
          MessageBox.Show("Unable to Add To SharePoint\n" +
                          "No test file processed. The list is blank.",
                          "Add To SharePoint",
                          MessageBoxButtons.OK, MessageBoxIcon.Exclamation);
          return;
      }

      WebSrvRef.Lists listService = new WebSrvRef.Lists();
      sViewGUID = "{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}"; // Test List View GUID
      sListGUID = "{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}"; // Test List GUID
      listService.Credentials = System.Net.CredentialCache.DefaultCredentials;

      frmAddToSharePoint dlgAddSharePoint = new frmAddToSharePoint();
      if (dlgAddSharePoint.ShowDialog() == DialogResult.Cancel)
      {
          dlgAddSharePoint.Dispose();
          listService.Dispose();
          return;
      }
      sValues = dlgAddSharePoint.Tag.ToString().Split('~');
      dlgAddSharePoint.Dispose();

      string strBatch =
          "<Method ID='1' Cmd='New'>" +
          "<Field Name='Client#'>" + sValues[0] + "</Field>" +
          "<Field Name='Company'>" + sValues[1] + "</Field>" +
          "<Field Name='Contact Name'>" + sValues[2] + "</Field>" +
          "<Field Name='Phone Number'>" + sValues[3] + "</Field>" +
          "<Field Name='Brand'>" + sValues[4] + "</Field>" +
          "<Field Name='Model'>" + sValues[5] + "</Field>" +
          "<Field Name='DPI'>" + sValues[6] + "</Field>" +
          "<Field Name='Color'>" + sValues[7] + "</Field>" +
          "<Field Name='Compression'>" + sValues[8] + "</Field>" +
          "<Field Name='Value % 1'>" + (((float)lstResults.Groups["Value 1"].Tag) * 100).ToString("##0.00") + "</Field>" +
          "<Field Name='Value % 2'>" + (((float)lstResults.Groups["Value 2"].Tag) * 100).ToString("##0.00") + "</Field>" +
          "<Field Name='Value % 3'>" + (((float)lstResults.Groups["Value 3"].Tag) * 100).ToString("##0.00") + "</Field>" +
          "<Field Name='Value % 4'>" + (((float)lstResults.Groups["Value 4"].Tag) * 100).ToString("##0.00") + "</Field>" +
          "<Field Name='Value % 5'>" + (((float)lstResults.Groups["Value 5"].Tag) * 100).ToString("##0.00") + "</Field>" +
          "<Field Name='Comments'></Field>" +
          "<Field Name='Overall'>" + (fTotalScore * 100).ToString("##0.00") + "</Field>" +
          "<Field Name='Average'>" + (fTotalAvg * 100).ToString("##0.00") + "</Field>" +
          "<Field Name='Transfered'>" + sValues[9] + "</Field>" +
          "<Field Name='Notes'>" + sValues[10] + "</Field>" +
          "<Field Name='Resolved'>" + sValues[11] + "</Field>" +
          "</Method>";

      try
      {
          xmlDoc = new System.Xml.XmlDocument();
          elBatch = xmlDoc.CreateElement("Batch");
          elBatch.SetAttribute("OnError", "Continue");
          elBatch.SetAttribute("ListVersion", "1");
          elBatch.SetAttribute("ViewName", sViewGUID);
          strBatch = strBatch.Replace("&", "&amp;");
          elBatch.InnerXml = strBatch;
          ndReturn = listService.UpdateListItems(sListGUID, elBatch);
          MessageBox.Show(ndReturn.OuterXml);
          listService.Dispose();
      }
      catch (Exception Ex)
      {
          MessageBox.Show(Ex.Message + "\n\nSource\n" + Ex.Source +
                          "\n\nTargetSite\n" + Ex.TargetSite +
                          "\n\nStackTrace\n" + Ex.StackTrace,
                          "Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
          listService.Dispose();
      }

    What am I doing wrong? What am I missing? Please help!! Frank
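
    One likely culprit, offered as a suggestion since the SOAP fault hides the detail: UpdateListItems requires each Field Name to be the column's internal name, not its display name, and SharePoint encodes special characters in internal names (a space becomes _x0020_, '#' becomes _x0023_). Display names such as 'Client#' and 'Contact Name' would therefore make the whole batch fail. Also, catching System.Web.Services.Protocols.SoapException and inspecting its Detail property usually reveals the underlying error text. A sketch of the corrected fragment - verify the exact internal names against the list schema (e.g. via GetList):

      <Method ID='1' Cmd='New'>
        <Field Name='Client_x0023_'>...</Field>
        <Field Name='Contact_x0020_Name'>...</Field>
        ...
      </Method>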

    Read the article

  • I/O-intensive MySQL server on Amazon AWS

    - by rhossi
    We recently moved from a traditional data center to cloud computing on AWS. We are developing a product in partnership with another company, and we need to create a database server for the product we'll release. I have been using Amazon Web Services for the past 3 years, but this is the first time I have received a spec with this very specific hardware configuration. I know there are trade-offs and that real hardware will always be faster than virtual machines; knowing that fact beforehand, what would you recommend?
    1) Amazon EC2?
    2) Amazon RDS?
    3) Something else?
    4) Forget it baby, stick to the real hardware
    Here are the hardware requirements. Both servers will be focused on I/O and on MySQL for the statistics, with the memory size and disk space for the image hosting. Server 1 and Server 2 share an identical specification:
    - I/O: The very main part of each server will be I/O processing. FusionIO cards have proven themselves extremely efficient; this is currently the best you can have in this domain. 1x Fusion ioDrive2 MLC 365GB (http://www.fusionio.com/load/-media-/1m66wu/docsLibrary/FIO_ioDrive2_Datasheet.pdf)
    - CPU: MySQL will use fewer CPU cores than Apache, but it will use them very hard; the E7 family has a 30M L3 cache, which provides a performance boost. 1x Intel E7-2870 will be OK.
    - Storage: SAS will be good enough in terms of performance, especially considering the space required. RAID 10 of 4x SAS 10k or 15k, for a total available space of 512 GB.
    - Memory: 64 GB minimum is required on each server, considering the size of the statistics database. Warning: the statistics database will grow quickly; if possible, consider starting with 128 GB directly - it will help.
    Thanks in advance. Best,

    Read the article

  • What is a good usage scenario for Rackspace Cloud Files CDN (powered by Akamai)? [closed]

    - by Andrew Smith
    I have just set up my website as a static page via Rackspace CDN / Akamai:
      www.example.co.uk is an alias for d9771e6f24423091aebc-345678991111238fabcdef6114258d0e1.r61.cf3.rackcdn.com.
      d9771e6f24423091aebc-345678991111238fabcdef6114258d0e1.r61.cf3.rackcdn.com is an alias for a61.rackcdn.com.
      a61.rackcdn.com is an alias for a61.rackcdn.com.mdc.edgesuite.net.
      a61.rackcdn.com.mdc.edgesuite.net is an alias for a63.dscg10.akamai.net.
      a63.dscg10.akamai.net has address 63.166.98.41
      a63.dscg10.akamai.net has address 63.166.98.40
      a63.dscg10.akamai.net has IPv6 address 2001:428:4c02::cda8:ecb9
      a63.dscg10.akamai.net has IPv6 address 2001:428:4c02::cda8:ed09
    The HTTP header:
      HTTP/1.0 200 OK
      Last-Modified: Fri, 19 Oct 2012 23:27:41 GMT
      ETag: fdf9e14b77def799e09e8ce815a521da
      X-Timestamp: 1350689261.23382
      Content-Type: text/html
      X-Trans-Id: tx457979be3bd746c2b4e5403a1189cdbc
      Cache-Control: public, max-age=900
      Expires: Sat, 27 Oct 2012 22:18:56 GMT
      Date: Sat, 27 Oct 2012 22:03:56 GMT
      Content-Length: 7124
      Connection: keep-alive
    I am wondering whether this is really the fastest solution to power the website. Investigating it through http://www.just-ping.com/, it seems that from many places the ping is very high, and during a quick investigation I found that they use GeoIP to resolve addresses based on WHOIS data, which is not accurate. Because of that, the ping from many places is above 300ms (for example, an ISP in Bangalore may have its requests routed to a poor edge node and stay that way for a month at a time). Meanwhile, by just using Amazon Web Services with the Route 53 anycast DNS servers and only 4 EC2 instances, India for example seems to be always below 100ms, while with Akamai it goes above 300ms in some cases - and that is because Route 53 uses BGP.
    From quickly checking Akamai, it seems they are not getting feedback from the traffic: the high ping stays constant even if I keep downloading large files and videos, which is the opposite of what they say on their website. They state that they optimize performance by taking feedback from requests, while it seems they just use GeoIP with per-city resolution (and mostly big cities at that). Because of this, AWS with Route 53 anycast DNS seems much more reliable, as does EdgeCast, which uses BGP - but I don't know how much it costs to deploy a static website there. Actually, I don't know whether EdgeCast's numbers hold up, because from isolated places there are many errors; their performance comes at the cost of delivery quality, with BGP switching routes during transfers of large files.
    So I am wondering what Akamai is really good for, because they don't seem to show particular strength in any field that I understand, except that they offer some software-based WAF on their website. What I really care about is the core distribution, so the question is: is Akamai really good for videos? For static websites? So far I have found AWS the most usable, with the most consistent ping and stable transfers.

    Read the article

  • What are the best open-source software non-profits for making financial contributions and/or facilitating useful work?

    - by Jason S
    I'm not a great programmer myself (my main job is more electrical engineering) and have never really helped out with any open-source projects, but I've benefited greatly from free and/or open-source software (MySQL, OpenOffice, Firefox, Apache, PHP, Java, etc.), and at some point I would like to make some modest financial contributions to help keep this stuff going. I'm wondering: what are the best non-profits to make financial contributions to? I'm aware of:
    - Open Source Initiative (founded 10 years ago by several prominent figures, including programmer and "The Cathedral and the Bazaar" author Eric S. Raymond)
    - Free Software Foundation
    - Mozilla Foundation
    - Apache Foundation
    Anyone have a particular favorite? Ideally I'd like to give money to a non-profit that would foster some of the smaller but promising open-source and/or free software projects. The big projects like Firefox and Apache are already well established, and there are a few small individual shareware programs I've already paid for directly. But it's those middle-ground projects that I would really like my contributions to support (one that comes to mind is a good GUI for Subversion or Mercurial). It's one thing for a single person to donate a little $$ to a small project; it's another for a foundation or something to give larger grants to projects that give a good bang for the buck. Conservation organizations like The Nature Conservancy or the Trust for Public Land have really honed this approach, but I'm not sure there's an equivalent model in software-land.

    Read the article

  • New Release of Oracle EPM (Enterprise Performance Management)

    - by Theresa Hickman
    I'm a huge fan of Hyperion products and consider Hyperion to be one of the best acquisitions Oracle has made in terms of applications. So I am really excited to talk about their latest release, Release 11.1.2 of the Oracle EPM System. This is EPM's largest release in 2 years, and it's jam-packed with new modules and features. In terms of brand-new products, there are three:
    1. Public Sector Planning and Budgeting meets the needs of public sector agencies, higher education, governments, etc. that have complex budget requirements. It supports position- or employee-based budgeting and integrates with MS Office and your ERP ledgers to perform commitment control.
    2. Hyperion Financial Close Management is a complete financial close solution that orchestrates the entire close process, from subledgers and general ledger to financial reporting and disclosure submissions. And of course, it is integrated with GL systems and consolidation systems. I saw a demo of this and it looked pretty slick. It has a unified close calendar that looks like a regular calendar and gives each person participating in the close process a task list. It comes with a Gantt chart that shows the relationships and dependencies among closing tasks. There are dashboards that allow you to track close progress and task completion, as well as perform trend analysis and see how much time is being spent on different activities in the close process. This gives you visibility you never had before to understand where the bottlenecks are and where improvements could be made. I think what I liked best about this product is that it provides a central place for all participants to communicate their progress. When I worked as an accountant, we used ad hoc tools such as spreadsheets, Word documents, emails, and phone calls during the close process. I like the idea of having a central system to track the overall progress as well as automate the entire financial close process. Who knows - maybe accountants won't have to revolve their lives around the month-end close anymore with a tool like this. Those periodic fire drills can become predictable, well-managed processes.
    3. Disclosure Management is an out-of-the-box, pre-packaged XBRL solution that meets statutory reporting requirements. This product is really going to help companies improve the timeliness of producing financial reports. Reports can be authored using MS Word and Excel, and XBRL instance documents can then be produced with embedded XBRL tags. It even supports footnotes and disclosures of non-financial information. With a product like this, companies no longer have to outsource their XBRL filing; they can bring it back in house to save costs and time.
    In terms of other enhancements, there is ERP Integrator, which provides integration and drill-downs from Hyperion products to source systems such as Oracle E-Business Suite, PeopleSoft, and SAP. No other vendor offers this level of integration. There's also a new product that links Oracle Essbase directly to Hyperion Financial Management for internal financial reporting, and there are new integrations between Hyperion Financial Management and Oracle's GRC products. They also improved the usability of Oracle Hyperion Planning, making it much easier for end users to submit plans and budgets via the web or via MS Excel. It is also integrated with intelligent approval workflows that are data-driven, user-configurable, and scenario-specific, to efficiently streamline the budgeting process.
    Here's the press release from April 7, 2010. Here's the pre-recorded webcast where you can see the demos - just register and watch the hour-long presentation. And finally, here's the newsletter.

    Read the article

  • SQLAuthority News – Download Whitepaper – Choosing a Tabular or Multidimensional Modeling Experience in SQL Server 2012 Analysis Services

    - by pinaldave
    Data modeling is the most important task for any BI professional. As a matter of fact, the biggest challenge is organizing disparate data into an analytic model that effectively and efficiently supports reporting and analysis. SQL Server 2012 introduces the BI Semantic Model (BISM), a single model that can support a broad range of reporting and analysis while blending two Analysis Services modeling experiences behind the scenes:
    - Multidimensional modeling - enables BI professionals to create sophisticated multidimensional cubes using traditional online analytical processing (OLAP).
    - Tabular modeling - provides self-service data modeling capabilities to business and data analysts.
    As data modeling evolves and business needs grow, new technologies and tools are emerging to help end users make the necessary adjustments to their reporting and analysis needs. This white paper provides practical guidance to help you decide which SQL Server 2012 Analysis Services modeling experience to choose - tabular or multidimensional. Do let me know your opinion in a comment. In simple words: when would you use tabular modeling, and when multidimensional modeling?
    Download Choosing a Tabular or Multidimensional Modeling Experience in SQL Server 2012 Analysis Services
    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: Business Intelligence, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL, Technology

    Read the article
