Search Results

Search found 126374 results on 5055 pages for 'windows server 2003 r2'.

Page 64/5055

  • Idea to develop a caching server between IIS and SQL Server

    - by John
    I work on a few high-traffic websites that all share the same database and are all heavily database driven. Our SQL Server is maxed out; although we have already implemented many changes that have helped, the server is still working too hard. We employ some caching on our websites, but the type of queries we use rules out SQL dependency caching. We tried SQL replication as a rough form of load balancing, but that didn't prove very successful because the replication process is quite demanding on the servers too, and it needed to run frequently since it is important that the data stays up to date. We do use a Varnish web caching server (Linux based) to take a bit of the load off both the web and database servers, but as a lot of the sites are customised per user we can only do so much. Anyway, the reason for this question... Varnish gave me an idea for a possible application that might help in this situation. Just as Varnish sits between a web browser and the web server and caches responses from the web server, I was wondering about the possibility of creating something that sits between the web server and the database server. Imagine that all SQL queries go through this SQL caching server. If it's a first-time query, it gets recorded, the result is requested from the SQL Server, and a copy is stored locally on the cache server. If it's a repeat request within a set time, the result is retrieved from the local copy without the query being sent to the SQL Server. The caching server could also take advantage of SQL dependency caching notifications. This seems like a good idea in theory. There's still the same amount of data moving back and forth from the web server, but the SQL Server is relieved of the work of processing the repeat queries. I wonder how difficult it would be to build a service that emulates requests and responses from SQL Server, whether SQL Server's own caching already does enough of this for there to be no benefit, or whether someone has done this before and I simply haven't found it. I would welcome any feedback or any references to relevant projects.
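
    The core of that repeat-query idea can be sketched in a few lines. The snippet below is only a rough illustration (not an implementation of such a proxy), assuming a hypothetical execute_on_sql_server() callable stands in for the real round trip to SQL Server:

        import hashlib
        import time

        CACHE_TTL_SECONDS = 30        # how long a repeat query may be served from the cache
        _cache = {}                   # query key -> (expiry timestamp, cached result)

        def cached_query(sql, params, execute_on_sql_server):
            """Serve a repeat query from the local cache; otherwise forward it to the database."""
            key = hashlib.sha1(repr((sql, params)).encode()).hexdigest()
            now = time.time()
            hit = _cache.get(key)
            if hit is not None and hit[0] > now:
                return hit[1]                                  # repeat request within TTL: no round trip
            result = execute_on_sql_server(sql, params)        # first (or expired) request goes to SQL Server
            _cache[key] = (now + CACHE_TTL_SECONDS, result)
            return result

    A real intermediary would also need cache invalidation (for example, hooking the SQL dependency notifications mentioned above) so stale results are evicted when the underlying data changes.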

    Read the article

  • Issue resolving names on Hyper-V guest with Routing and Remote Access

    - by John Sheehan
    I've got a Win2k8 standard server running Hyper-V with a Server 2003 web guest instance running. The host is publicly available on the internet. I've created an Internal Private network in the Hyper-V Virtual Network manager. I've set the host IP for that virtual adapter to 192.168.0.1. I've set the IP on the guest to 192.168.0.2. They can ping each other and share files. I can't browse the web on the guest though. NSLOOKUPs are working. I've tried setting the DNS server setting on the guest to 192.168.0.1 and something external like Google's 8.8.8.8 server to no avail. Windows firewall is disabled on the internal virtual network. I've tried it with both DNS installed on the host and without it. I'm not sure which RRAS/NAT settings are relevant to pass on so ask if you need me to clarify anything. How do I get outbound internet working on the guest VM?

    Read the article

  • Network mapping script (VBS): Vista doesn't work, XP does

    - by The_cobra666
    Hi all, I've got a weird problem (like always :p). The situation: a Windows 2003 domain with XP clients. With a GPO I'm running a VBS script on login to map a few drives. This works great on XP, but not on Vista. If I manually run the script after the user has logged on, it works. So I know the script works on Vista, it just doesn't run via the GPO. The user has admin privileges. I also have the same problem on Windows 7 RC1, so it must be related. The script:

        on error resume next
        Dim objNetwork
        Dim strDriveLetter, strRemotePath, strUserName
        strDriveLetter = "Z:"
        strRemotePath = "\\Onsgeluk.ons_geluk.local\Profieldoc"
        Set objNetwork = WScript.CreateObject("WScript.Network")
        strUserName = objNetwork.UserName
        objNetwork.RemoveNetworkDrive "Z:"
        objNetwork.MapNetworkDrive strDriveLetter, strRemotePath _
            & "\" & strUserName
        objNetwork.RemoveNetworkDrive "X:"
        objNetwork.MapNetworkDrive "X:", "\\Onsgeluk.ons_geluk.local\Data"
        objNetwork.RemoveNetworkDrive "Y:"
        objNetwork.MapNetworkDrive "Y:", "\\Onsgeluk.ons_geluk.local\Mappen\hoofdverpleging"

    Does anyone have a clue? Thanks in advance guys (and girls). PS: sorry for my bad English!

    Read the article

  • Is SQL Server 2008 R2 a full release of SQL Server?

    - by AaronBertrand
    This has come up in conversations more than once in the past little while - recently on twitter I made the casual comment that later this year, SQL Server 2008 will be "two versions old." Well, not everyone agrees that that is technically true. So, I thought I'd put something out there that isn't limited to 140 characters. There are certainly some valid arguments on both sides, but my opinion - based both on these facts and on my memory that Microsoft has marketed it as such - is that SQL Server...(read more)

    Read the article

  • Exchange 2010: Find Move Request Log after move request completes

    - by gravyface
    EDIT: I've significantly changed my question here to streamline it a bit. I've gone ahead and used 100 as my corrupted item count and ran the move from the Exchange Shell. So the trail of tears continues with my SBS 2003 to 2011 migration: all the mailboxes have been moved from the mailbox store on OLDSERVER to NEWSERVER, with the Local Move Requests completing successfully, except for one. What I'd like to do now is review the previous move request log files: while they were in progress, I could right-click > Properties > Log > View Log File, but now that they're completed, that's not available. Nor can I use: Get-MoveRequestStatistics <user> -includereport | fl MoveReport ...as the move request has now completed and it errors out with "couldn't find a move request that corresponds...". Basically, what I'd like to do is present the list of bad items to the user so that they're aware of which items didn't come across and, if anything important was lost, they can check their current OST, an archive.pst, etc. to recover it if possible. If this all needs to be wrapped up in a batch Exchange PowerShell command that pipes the output to log files on disk somewhere, I'm all ears, and I'd appreciate it for the next migration we do.

    Read the article

  • Slow File Copy observed copying 40GB files across network to iSCSI device

    - by Rick
    Here's a curious one for the gurus. The setup:
    Source machine: Windows Server 2003 R2 with a local hard drive holding a 40GB VHD file; 1 x 1Gbps network card, Cat6 cable, switch.
    Target machine: Windows Server 2008 R2 with an iSCSI connection to an iSCSI target on a separate machine (1TB, RAID5); 1 x 1Gbps network card, Cat6 cable, connected to the same switch as the source machine, plus a second 1Gbps network card, Cat6 cable, connected via an isolated switch to the iSCSI target. The switches are Netgear JGS524 models (web managed).
    If I copy from the Win2003R2 machine to the Win2008R2 machine's local drive, I get 40GB in 45 minutes, 36 seconds. If I copy from the Win2008R2 machine to the iSCSI target (local drive to iSCSI target), I get 40GB in 37 minutes, 56 seconds. If I copy from the Win2003R2 machine to the iSCSI target via the Win2008R2 machine, I get 40GB in 3 hours, 50 minutes, 24 seconds. All copies were done via the following command issued on the Win2008R2 box: XCOPY <source> <target> /J (the /J switch copies using unbuffered I/O, recommended for very large files). So, what's the bit I'm missing here? Why does a back-to-back copy take 1 hour, 23 minutes, 32 seconds in total, while a "straight through" copy takes almost three times as long? The switches show no errors, and the network hovers around the 3% utilisation mark for the duration of the copy (whereas the "back-to-back" copies are around the 25% utilisation mark). What have I missed?
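
    For a sense of scale, the quoted timings work out to the following rough throughputs (a quick back-of-the-envelope calculation based only on the figures above):

        # Rough effective throughput for a 40GB copy, using the timings quoted above.
        SIZE_MB = 40 * 1024

        timings = [
            ("2003 R2 local disk -> 2008 R2 local disk", 45 * 60 + 36),
            ("2008 R2 local disk -> iSCSI target",       37 * 60 + 56),
            ("2003 R2 -> iSCSI target via 2008 R2",      3 * 3600 + 50 * 60 + 24),
        ]

        for label, seconds in timings:
            print(f"{label}: {SIZE_MB / seconds:.1f} MB/s")

        # Roughly 15.0, 18.0 and 3.0 MB/s respectively -- the pass-through copy runs
        # at about a fifth of the speed of the slower individual hop.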

    Read the article

  • Allowing XP Home Clients To Access Active Directory Printers

    - by Sean M
    My school's network is based on Active Directory on Windows Server 2003 servers. Most of the computers in the school are members of the domain. However, we also acquired a passel of netbooks that are running Windows XP Home (as netbooks tend to), and we're trying to make those useful. The netbooks are made available to students by check-out, so none of them are dedicated to a specific user. I only want to allow the netbooks to do two significant network activities: to access the Internet (this is working acceptably well so far), and to print to one or more printers on the network. That second one is where trouble starts. I'm trying to find a way to allow the XP Home clients to access those Active Directory printers. All the solutions that I can come up with right now are expensive, ugly, or both - for example, changing the OS on the netbooks (even with imaging, that would take a lot of my time) or making sure that the user account on each netbook has a matching account in Active Directory with permissions for printing (invites security/maintainability disaster). Are there any elegant solutions? Failing that, what's the best ugly solution for allowing my students to print from the netbooks?

    Read the article

  • Word Document Turns to Read-Only

    - by Psycho Bob
    I am running into an issue with a user whose Word document is somehow turning itself into read-only. The user is using Word 2003 and is accessing a document that is on a Server 2008 share. The document itself starts out as a normal, editable document (the user has Full Control permissions), and the user is able to save and do the 'normal' things you would do to a document. However, after a couple of saves, the document turns to read-only (according to the title bar) even though the read-only attribute is not checked in the document's properties. Here is some additional information about the situation:
    * The user has approximately 5-8 Word documents open at a time.
    * The user saves the document frequently (sometimes at a frequency of once per minute).
    * Once the document is closed, it will open as a normal document if reopened.
    * When the document does turn to read-only, the user will do a "Save As" on the document and save it as FILENAME #, where # is an increment of how many times this has happened (some documents are up to their 30th iteration).
    I understand that there is probably some room for user education here and that they could just be copying the read-only document to a new one, closing and reopening the read-only doc, then copying all the information back. However, I would like to get to the root cause of the problem and try to stop it from happening in the first place. UPDATE: Apparently the reinstall did not fix the issue. I researched the issue a bit more and found that disabling the background save may take care of it, but I haven't had a chance to try it yet. Does anyone else have any other ideas?

    Read the article

  • Time sync fails on Hyper-V VM, but succeeds when I log in as a domain user

    - by Richard Beier
    We have a Windows Server 2003 SP2 VM running on Hyper-V (Server 2008 R2 host). The VM has Hyper-V time synchronization enabled. I noticed that the time on the VM was fast by around 25 minutes. I saw the following in the event log:
        The time provider NtpClient is configured to acquire time from one or more time sources, however none of the sources are currently accessible. No attempt to contact a source will be made for 15 minutes. NtpClient has no source of accurate time.
        The time provider NtpClient cannot reach or is currently receiving invalid time data from ourdc.ourdomain.local (ntp.d|192.168.2.18:123-192.168.2.2:123).
        Time Provider NtpClient: No valid response has been received from domain controller ourdc.ourdomain.local after 8 attempts to contact it. This domain controller will be discarded as a time source and NtpClient will attempt to discover a new domain controller from which to synchronize.
    I had been logged in as a local user. (We have an old app that runs on this VM - it requires a user to be logged in at all times, and we use a non-domain user account for this.) When I logged in as a domain user, the clock almost immediately corrected itself. Running "w32tm /monitor" and "net time" as the domain user showed no errors, and indicated that our domain controller was the time source. Does anyone know what might cause this, and why logging in under a domain account fixes the problem? I'm wondering if the time will start to drift again. Thanks for your help, Richard

    Read the article

  • Network Security Device/Software

    - by Campo
    We currently run Symantec Antivirus Corporate 10.2. The software is really easy to manage on a network, and the actual virus detection isn't bad, but the malware detection is crap. We were recently infected with an email bot that got us put on some block lists; this has been resolved, but I cannot have that happen again. I would like to find a program as easy to manage as Symantec that I can install on all the users' workstations as well as the servers. We run a Windows 2003 domain and have a couple of 2008 test servers in the environment. Most of the workstations are XP, though I am using Windows 7 and Symantec is not compatible with this OS... so we need a solution that covers all those operating systems. If it could be installed on Macs too, that would be a bonus, though not necessary at all. This software must detect viruses AND malware. I am looking for something that combines the features of anti-malware programs like Malwarebytes or Spybot with an antivirus program like Symantec or AVG. Alternatively, if there is a piece of hardware that acts as a firewall, router, and packet inspector for viruses/spam, that would be the most ideal solution; I could then supplement it with a piece of software to pick up what the hardware misses. Thank you for your suggestions.

    Read the article

  • How to connect the virtual networks of vmware guests running on different hosts?

    - by gyrolf
    In a test setup, we are running several virtual machines on a single VMware Workstation host. All virtual machines are connected via a "host only" network. This runs fine with up to 2 or 3 virtual machines (depending on the host hardware). To allow more virtual machines, we want to use more host machines. Details about the environment and applications:
    - Host PCs are running Windows XP in a corporate intranet.
    - The VMware version used is Workstation 6.5.
    - Guests are running Windows Server 2003.
    - All guests act as web servers.
    - One of the guests additionally acts as a Windows file server, offering shared folders for the other guests to connect to.
    Restrictions:
    - VMware guests shall not be visible from the intranet.
    - Changes to the host PCs are restricted by corporate policy.
    - In the virtual network, no domain controller exists; all virtual machines are members of the same workgroup.
    - Running the virtual network as NAT is possible.
    - Port forwarding might be used if it does not conflict with ports used by the host PC.
    Looking for a solution, I found hints about using router or VPN software on the hosts, but without any details on how to set it up. (I found a similar question, Sharing the network between 2 VMware hosts, but the answer was not sufficient for me.)

    Read the article

  • What should the memory configuration be?

    - by AngryHacker
    We have a server (an HP ProLiant DL585 G1) which hosts Windows 2003 R2 x64 with SQL Server 2005 x64 and a host of other apps. It currently has 6GB of RAM. We are currently very memory constrained and it's clear that we need more memory. 8GB will probably do the trick; however, we are not sure what memory configuration will give us the biggest performance bang for the buck. Currently all 8 memory slots are filled (4 slots have 1GB sticks, while the other 4 slots have 512MB sticks). Should we throw the 512MB sticks away and replace them all with 1GB sticks? If we decided to go with a higher memory configuration (e.g. 10GB, 12GB, or 16GB), is it advisable to keep all the sticks the same size, or does it not matter? I was once told that interleaved memory requires (for better performance) that sticks be installed in multiples (e.g. 2, 4, 8, 16, etc.). I am not even sure that the server has an interleaved configuration (and don't know how to find out), but is this true? Thanks.

    Read the article

  • WS 2008 R2 giving "Internal Server Error"

    - by dragon112
    I have had this problem for a while now and can't find the cause at all. When I open a page it will sometimes give a 500 Internal Server Error message. This happens on a website that otherwise works perfectly, but when I try to upload anything it will give this message (all PHP settings have been set to either 1GB or 3000 seconds, as have the IIS headers). The error also occurs when I open a simple page which does nothing more than include another PHP page and a couple of classes. I have no idea what causes this error and would love to hear from any of you on what it could be. I checked the server logs, and for the upload issue I found this error: The description for Event ID 1 from source named cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer. If the event originated on another computer, the display information had to be saved with the event. The following information was included with the event: managed-keys-zone ./IN: loading from master file managed-keys.bind failed: file not found the message resource is present but the message is not found in the string/message table Regards, Dragon

    Read the article

  • Suggestions for Windows 8 migration [closed]

    - by Big Endian
    I'm thinking of migrating to Windows 8. At first I hated it, but I'm pretty sure the Windows 8 model is the future, and I don't particularly want to end up hating the future like my parents, frustrated and bewildered by anything past Windows XP. I'm currently running Windows 7 and my system has been accumulating some problems. It's probably an accumulation of issues from installing too much software, changing firewall settings, installing Ubuntu alongside Windows, and... well, I'm not sure, but my computer has been buggy in unexpected ways lately (freezing and unfreezing, the display driver crashing and recovering, and what I call a "deep freeze/thaw cycle" where the mouse won't even move for a while). I'm good at solving computer problems, but I can't seem to get to the root of these, and my best idea for fixing them is to make sure I've backed up every file and then re-install the entire OS. Luckily for me, a new OS is just around the corner, so this would be a good time to get two things out of the way at once. The problem I see is that the upgrade options I've found are all "seamless". I don't want a seamless upgrade; I want to wipe the slate clean and start all over. Does this mean I will have to buy a full, new copy of Windows 8 rather than one of the cheaper upgrade options? Or does it not make sense for me to go to Windows 8 at all, given that I have a laptop, not a tablet? Maybe I should just re-install Windows 7, or even call good enough good enough, try to eliminate the bugs, and start with a fresh slate in 2-3 years after this computer eventually dies entirely from (inevitable) hardware failure. What would be the advantages, disadvantages, and costs of each option; how would I go about upgrading to Windows 8 if that's the option I choose; and what is your personal opinion about my situation?

    Read the article

  • weird routes automatically being added to windows routing table

    - by simon
    On our Windows 2003 domain with XP clients, we have started seeing routes appearing in the routing tables on both the servers and the clients. The route is a /32 for another computer on the domain. The route gets added when one Windows computer connects to another computer and needs to authenticate. For example, if computer A with IP 10.0.1.5/24 browses the C: drive of computer B with IP 10.0.2.5/24, a static route will get added on computer B like so:
        dest        netmask            gateway     interface
        10.0.1.5    255.255.255.255    10.0.2.1    10.0.2.5
    This also happens on Windows-authenticated SQL Server connections. It does not happen when computers A and B are on the same subnet. None of the servers have RIP or any other routing protocols enabled, and there are no batch files etc. setting routes automatically. There is another Windows domain that we manage with a near identical configuration that is not exhibiting this behaviour; the only difference with this domain is that it is not up to date with its patches. Is this meant to be happening? Has anyone else seen this? Why is it needed when I have perfectly good default gateways set on all the computers on the domain?!
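
    As an aside, one quick way to spot these host routes on a machine is to scan the IPv4 routing table for /32 entries. The sketch below is purely illustrative and assumes the standard column layout of the Windows route print output (network destination, netmask, gateway, interface, metric):

        import subprocess

        # List /32 host routes from the local Windows routing table (illustrative only).
        output = subprocess.run(["route", "print"], capture_output=True, text=True).stdout

        for line in output.splitlines():
            fields = line.split()
            # IPv4 data rows have five columns: destination, netmask, gateway, interface, metric.
            if len(fields) == 5 and fields[1] == "255.255.255.255":
                print("host route:", line.strip())

    On that output, the unexpected entries show up as destinations matching individual domain machines (like the 10.0.1.5 row above) rather than broadcast addresses.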

    Read the article

  • How do I secure Sql Server 2008 R2

    - by Mark Tait
    I have both a dedicated and a VPS (from Fasthosts) virtual server; the web sites/applications I run on these access SQL Server installed on the same web server. Until now, I have logged on to SQL Server on both the dedicated and the VPS server from SQL Server Management Studio - until I noticed, in my server application logs, multiple attempts to log on to SQL Server using the 'sa' username with a failed password. So someone (or some bot) is trying hard (repeatedly, every couple of hours, for approx 20 attempts during each instance) to log on... so obviously I have to lock down remote access to SQL Server. What I have done is gone into Configuration Manager and, in SQL Server Network Configuration - Protocols for Sql2008 and also in SQL Native Client 10.0 Configuration - Client Protocols, disabled Named Pipes and TCP/IP (VIA is disabled by default). I have left Shared Memory enabled. I also disabled the SQL Server Browser in SQL Server Services. Now the only way I can manage the databases on these servers is by logging on to them via Remote Desktop. Can anyone confirm whether this is the correct way of stopping anyone maliciously logging on to SQL Server? (I'm not a DBA or security expert - and there are hundreds of articles advising all different ways - but I was hoping for the experts here to confirm, or otherwise, whether what I've done is correct.) Thank you, Mark

    Read the article

  • Slow login to load-balanced Terminal Server 2008 behind Gateway Server

    - by Frans
    I have a small load-balanced (using Session Broker) Terminal Server 2008 farm behind a Gateway Server which is accessed from the Internet. The problem I have is that there is a delay of 20-30 seconds if the session broker switches the user to another server during login. I think this is related to the fact that I am forcing the security layer to be RDP rather than SSL.
    The background: The Gateway server has a public routable IP address and DNS name so it can be accessed from the Internet, and all users come in via this route (the system is used to provide access to hosted applications to external customers). The actual terminal servers only have internal IP addresses. This works really well, except that with a Vista or Windows 7 client, the Remote Desktop client will negotiate with the server to use SSL for the security layer. This then exposes the auto-generated certificate that TS1 or TS2 has - but since they are internal, auto-generated certificates, the client will get a stern warning that the certificate is not valid. I can't give the servers a properly authorised certificate as the servers do not have a public routable IP address or DNS name. Instead, I am using Group Policy to force the connections to be over RDP instead of SSL:
    \Computer Configuration\Policies\Administrative Templates\Windows Components\Terminal Services\Terminal Server\Security\Require use of specific security layer for remote (RDP) connections
    The Windows 7 user now gets a much less stern warning that "the server's identity cannot be confirmed", which I can live with. I don't have enough control over the end users' machines to ask them to install a new root certificate either. TS1 and TS2 are also load-balanced using the Session Broker, which is installed on the Gateway Server. I am using round-robin DNS, so the user's initial connection will go via Gateway1 to either TS1 or TS2. TS1/TS2 will then talk to the session broker and may pass the user to the other server, i.e. the user may get connected to TS2, but after talking to the session broker the user may be passed to TS1, which is where they will run their session. When this switching of servers happens, in my setup, the screen sits with the word "Welcome" for 20-30 seconds, after which it flickers, "Welcome" is shown again, and then it flashes through the normal login screens (i.e. "wait for user profile manager" etc). Having done some research, I think what is happening is that the user is being fully logged on to TS2 (while "Welcome" is shown) before being passed to TS1, where they are then logged in again. It is interesting that normally, when you see the "Welcome" word, the little circle to the left rotates; however, it does not rotate during this delay - the screen just looks frozen. This blog post leads me to think that this is because CredSSP is not being used, probably because I am disallowing SSL and forcing RDP.
    What I have tried: I enabled SSL again, which removes the "Welcome" delay. However, it seems to introduce a new delay much earlier in the process; specifically, when the RDP client is saying "initialising connection" - this is now much slower. That is quite apart from the fact that my certificate problem precludes me from using that solution without considerable difficulty. I tried disabling the load balancing (just removing the servers from the session broker farm) and the connections do not have any delay. The problem is also intermittent in the sense that it only happens when the user gets bumped from one server to another.
    I tested this by trying to connect directly to TS1 (via the Gateway, of course) and then checking which server I actually got connected to. Just to be sure, I also bypassed the round-robin DNS to see if it had any impact, and it didn't. The setup is essentially in line with the MS recommendations in the TS Session Broker Load Balancing Step-by-Step Guide. I also tried changing to a dedicated redirector: rather than using round-robin DNS, I pointed my DNS to the Gateway server and configured it to be a dedicated redirector (disallow logons, add it to the farm). Same problem, alas. Any ideas or suggestions gratefully received.

    Read the article

  • Introducing Microsoft SQL Server 2008 R2 - Business Intelligence Samples

    - by smisner
    On April 14, 2010, Microsoft Press (blog | twitter) released my latest book, co-authored with Ross Mistry (twitter), as a free ebook download - Introducing Microsoft SQL Server 2008 R2. As the title implies, this ebook is an introduction to the latest SQL Server release. Although you'll find a comprehensive review of the product's features in this book, you will not find the step-by-step details that are typical in my other books. For those readers who are interested in a more interactive learning experience, I have created two sample files for download: the IntroSQLServer2008R2Samples project and the Sales Analysis workbook. Here's a recap of the business intelligence chapters and the samples I used to generate the screen shots, by chapter:
    Chapter 6: Scalable Data Warehousing covers a new edition of SQL Server, Parallel Data Warehouse. Understandably, Microsoft did not ship me the software and hardware to set up my own Parallel Data Warehouse environment for testing purposes, and consequently you won't see any screenshots in this chapter. I received a lot of information and a lot of help from the product team during the development of this chapter to ensure its technical accuracy.
    Chapter 7: Master Data Services is a new component in SQL Server. After you install Master Data Services (MDS), which is a separate installation from SQL Server although it's found on the same media, you can install sample models to explore (which is what I did to create screenshots for the book). To do this, you deploy the packages found at \Program Files\Microsoft SQL Server\Master Data Services\Samples\Packages. You will first need to use the Configuration Manager (in the Microsoft SQL Server 2008 R2\Master Data Services program group) to create a database and a Web application for MDS. Then, when you launch the application, you'll see a Getting Started page which has a Deploy Sample Data link that you can use to deploy any of the sample packages.
    Chapter 8: Complex Event Processing is an introduction to another new component, StreamInsight. This topic was way too large to cover in depth in a single chapter, so I focused on information such as architecture, development models, and an overview of the key sections of code you'll need to develop for your own applications. StreamInsight is an engine that operates on data in flight and as such has no user interface that I could include in the book as screenshots. The November CTP version of SQL Server 2008 R2 included code samples as part of the installation, but these are not the official samples that will eventually be available on Codeplex. At the time of this writing, the samples are not yet published.
    Chapter 9: Reporting Services Enhancements provides an overview of all the changes to Reporting Services in SQL Server 2008 R2, and there are many! In previous posts, I shared more details than you'll find in the book about new functions (Lookup, MultiLookup, and LookupSet), properties for page numbering, and the new global variable RenderFormat. I will confess that I didn't use actual data in the book for my discussion of the Lookup functions, but I did create real reports for the blog posts and will upload those separately. For the other screenshots and examples in the book, I have created the IntroSQLServer2008R2Samples project for you to download. To preview these reports in Business Intelligence Development Studio, you must have the AdventureWorksDW2008R2 database installed, and you must download and install SQL Server 2008 R2.
    For the map report, you must execute the PopulationData.sql script that I included in the samples file to add a table to the AdventureWorksDW2008R2 database. The IntroSQLServer2008R2Samples project includes the following files:
    - 01_AggregateOfAggregates.rdl to illustrate the use of embedded aggregate functions
    - 02_RenderFormatAndPaging.rdl to illustrate the use of page break properties (Disabled, ResetPageNumber), the PageName property, and the RenderFormat global variable
    - 03_DataSynchronization.rdl to illustrate the use of the DomainScope property
    - 04_TextboxOrientation.rdl to illustrate the use of the WritingMode property
    - 05_DataBar.rdl
    - 06_Sparklines.rdl
    - 07_Indicators.rdl
    - 08_Map.rdl to illustrate a simple analytical map that uses color to show population counts by state
    - PopulationData.sql to provide the data necessary for the map report
    Chapter 10: Self-Service Analysis with PowerPivot introduces two new components to the Microsoft BI stack, PowerPivot for Excel and PowerPivot for SharePoint, which you can learn more about at the PowerPivot site. To produce the screenshots for this chapter, I created the Sales Analysis workbook, which you can download (although you must have Excel 2010 and the PowerPivot for Excel add-in installed to explore it fully). It's a rather simple workbook because space in the book did not permit a complete exploration of all the wonderful things you can do with PowerPivot. I used a tutorial that was available with the CTP version as a basis for the report, so it might look familiar if you've already started learning about PowerPivot. In future posts, I'll continue exploring the new features in greater detail. If there are any special requests, please let me know!

    Read the article

  • ColdFusion Server crash after thousands of HTTP requests

    - by Jason Bristol
    We are running ColdFusion 8 on a Windows Server 2003 VPS with an API that exposes student records to a partner API through a connector. Our API returns around 50k student records serialized in XML format pretty seamlessly. My question stems from something very frightening that happened today when we tested our connector against our partner's API: our entire website and web host went down. We assumed that our host was just having some issues, and after 4 hours with no resolution and no response from their customer service we finally got a response from them claiming that they had an "unauthorized user" in their network. After our server was back up, we were unable to connect to our website, as if the web service or ColdFusion itself had frozen. This is really where my concern comes from, as I fear we may have overloaded the web service. As I mentioned before, we tried sending over 50k HTTP POST requests to our partner's API; however, everything stopped after around 1.6k. Is this bad practice, or is there some sort of rate limiting I can relax somewhere in the server configuration? We managed to find a workaround, but it bypasses our connector, which is critical to our design. This would have been a one-time deal, as the purpose of so many requests was to populate our partner's website with current data; after that, hourly syncs will keep requests down to around 100 per hour.
    UPDATE: Our partner API is owned and operated by Pardot. We are converting students to prospects by passing student data to their API, which unfortunately only seems to accept one student at a time; for that reason we have to do all 50k requests individually. Our server has 4GB of RAM and an Intel Core 2 Duo @ 2.8GHz running Windows Server 2003 SP2. I monitored the server during a 100-student sync, a 400-student sync, and a 1.4k-student sync, with the following results:
    - 100 students: 2.25GB of memory, 30-40% CPU utilization, 0.2-0.3% network bandwidth
    - 400 students: 2.30GB of memory, 30-50% CPU utilization, 0.2-1.0% network bandwidth
    - 1.4k students: 2.30GB of memory, 30-70% CPU utilization, 0.2-1.0% network bandwidth
    I know this is a far cry from 50k students, but I don't want to risk taking down our CMS system again, assuming that was the cause.
    To give you a look at our code:

        <cfif (#getStudents.statusCode# eq "200 OK")>
            <cftry>
                <cfloop index="StudentXML" array="#XmlSearch(responseSTUD,'/students/student')#">
                    <cfset StudentXML = XmlParse(StudentXML)>
                    <cfhttp url="#PARDOT_CMS_UPSERT#" method="post" timeout="10000">
                        <cfhttpparam type="url" name="user_key" value="#PARDOT_CMS_USERKEY#">
                        <cfhttpparam type="url" name="api_key" value="#api_key#">
                        <cfhttpparam type="url" name="email" value="#StudentXML.student.email.XmlText#">
                        <cfhttpparam type="url" name="first_name" value="#StudentXML.student.first.XmlText#">
                        <cfhttpparam type="url" name="last_name" value="#StudentXML.student.last.XmlText#">
                        <cfhttpparam type="url" name="in_cms" value="#StudentXML.student.studentid.XmlText#">
                        <cfhttpparam type="url" name="company" value="#StudentXML.student.agencyname.XmlText#">
                        <cfhttpparam type="url" name="country" value="#StudentXML.student.countryname.XmlText#">
                        <cfhttpparam type="url" name="address_one" value="#StudentXML.student.address.XmlText#">
                        <cfhttpparam type="url" name="address_two" value="#StudentXML.student.address2.XmlText#">
                        <cfhttpparam type="url" name="city" value="#StudentXML.student.city.XmlText#">
                        <cfhttpparam type="url" name="state" value="#StudentXML.student.state_province.XmlText#">
                        <cfhttpparam type="url" name="zip" value="#StudentXML.student.postalcode.XmlText#">
                        <cfhttpparam type="url" name="phone" value="#StudentXML.student.phone.XmlText#">
                        <cfhttpparam type="url" name="fax" value="#StudentXML.student.fax.XmlText#">
                        <cfhttpparam type="url" name="output" value="simple">
                    </cfhttp>
                </cfloop>
                <cfcatch type="any">
                    <cfdump var="#cfcatch.Message#">
                </cfcatch>
            </cftry>
        </cfif>

    UPDATE 2: I checked the CF logs and found a couple of these: "Error","jrpp-8","06/06/13","16:10:18","CMS-API","Java heap space The specific sequence of files included or processed is: D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm, line: 675 " java.lang.OutOfMemoryError: Java heap space at java.util.Arrays.copyOf(Arrays.java:2882) at java.io.CharArrayWriter.write(CharArrayWriter.java:105) at coldfusion.runtime.CharBuffer.replace(CharBuffer.java:37) at coldfusion.runtime.CharBuffer.replace(CharBuffer.java:50) at coldfusion.runtime.NeoBodyContent.write(NeoBodyContent.java:254) at cfapi2ecfm292155732._factor30(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:675) at cfapi2ecfm292155732._factor31(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:662) at cfapi2ecfm292155732._factor36(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:659) at cfapi2ecfm292155732._factor42(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:657) at cfapi2ecfm292155732._factor37(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm) at cfapi2ecfm292155732._factor44(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:456) at cfapi2ecfm292155732._factor38(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm) at cfapi2ecfm292155732._factor46(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:455) at cfapi2ecfm292155732._factor39(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm) at cfapi2ecfm292155732._factor47(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:453) at cfapi2ecfm292155732.runPage(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:1) at coldfusion.runtime.CfJspPage.invoke(CfJspPage.java:192) at coldfusion.tagext.lang.IncludeTag.doStartTag(IncludeTag.java:366) at coldfusion.filter.CfincludeFilter.invoke(CfincludeFilter.java:65) at coldfusion.filter.ApplicationFilter.invoke(ApplicationFilter.java:279) at coldfusion.filter.RequestMonitorFilter.invoke(RequestMonitorFilter.java:48) at 
coldfusion.filter.MonitoringFilter.invoke(MonitoringFilter.java:40) at coldfusion.filter.PathFilter.invoke(PathFilter.java:86) at coldfusion.filter.ExceptionFilter.invoke(ExceptionFilter.java:70) at coldfusion.filter.ClientScopePersistenceFilter.invoke(ClientScopePersistenceFilter.java:28) at coldfusion.filter.BrowserFilter.invoke(BrowserFilter.java:38) at coldfusion.filter.NoCacheFilter.invoke(NoCacheFilter.java:46) at coldfusion.filter.GlobalsFilter.invoke(GlobalsFilter.java:38) at coldfusion.filter.DatasourceFilter.invoke(DatasourceFilter.java:22) at coldfusion.CfmServlet.service(CfmServlet.java:175) at coldfusion.bootstrap.BootstrapServlet.service(BootstrapServlet.java:89) at jrun.servlet.FilterChain.doFilter(FilterChain.java:86) Looks like I might have crashed the JVM in CF, is there a better way to do this? We are thinking of just exporting all records initially as a CSV file and importing it into Pardot seeing as we will never have to do a request this large again.
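
    Regarding the CSV idea at the end: a one-off flat-file export sidesteps holding 50k cfhttp responses in the ColdFusion heap altogether. Purely as an illustration (in Python rather than CFML, with a hypothetical students.xml file standing in for responseSTUD), a bulk export of the same /students/student fields could look like this:

        import csv
        import xml.etree.ElementTree as ET

        # Field names mirror the cfhttpparam values used in the ColdFusion loop above.
        FIELDS = ["email", "first", "last", "studentid", "agencyname", "countryname",
                  "address", "address2", "city", "state_province", "postalcode",
                  "phone", "fax"]

        tree = ET.parse("students.xml")                 # hypothetical dump of the student XML
        with open("students.csv", "w", newline="", encoding="utf-8") as f:
            writer = csv.writer(f)
            writer.writerow(FIELDS)
            for student in tree.getroot().iter("student"):
                writer.writerow([student.findtext(field, default="") for field in FIELDS])

    The hourly delta syncs mentioned above would still go through the connector, but the initial 50k-record load would then be a single file import on the Pardot side rather than 50k individual HTTP round trips.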

    Read the article

  • Speaker at the German Visual FoxPro Developer Conference 2003

    The following is an excerpt from the UniversalThread conference coverage of the German Visual FoxPro Developer Conference 2003, written by Hans-Otto Lochmann and Armin Neudert.
    Track: Visual FoxPro and Linux. This track consists of 4 sessions presented on one day in one sequence. Originally the Linux portion of this track was to be presented by Whil Hentzen, the well-known publisher, book author and conference speaker. Unfortunately some illness prevented him from joining this DevCon. Rainer got the bad news only early on Friday morning; it was definitely too late to find a replacement among the already invited speakers on such short notice. So Rainer decided to take over these "three sessions in a row" by himself, with "a little help from his friends". He hired a coach for the weekend and prepared slides and sessions by himself - the originally planned slides and session material were still in the USA. Rainer barely survived an endless disaster of C0000005's due to various wrong configuration settings... At the presentation, Jochen Kirstätter helped massively with technical details regarding Linux, whereas Rainer did the slides and the presentation. Gerold Lübben then presented the MySQL part - as originally planned. This track concentrated on how to run Visual FoxPro applications on Linux machines with the help of a Windows emulator like Wine. As more and more people use Linux machines in production (and not just for running servers), more and more invitations to bid for a development job include the requirement to run the application in a Linux environment. If you would like to participate in such submissions, then you should get familiar with the open source operating system Linux and the open source database system MySQL. [...] These sessions provided a broad, complete overview of where Linux fits into the current computing landscape from the perspective of a VFP developer, where VFP can be used with Linux, and a conceptual plan for how to approach the incorporation of Linux into your day-to-day work. In order for you to be able to work with a Linux back end, you're going to need to know something about how Linux works. The best way involves a two-step process: first, plunk down a Linux workstation on your desk next to your Windows machine and develop some experience with the new OS; second, once you have a basic level of comfort with Linux, gained through your experience on a workstation, leverage that knowledge and learn to connect to a Linux server from your Windows machine. This track showed both of these processes: what you can expect when you set up your Linux workstation, how to set it up, how to connect it to your Windows network, how to fit VFP into the mix, and even how you could use it to replace your Windows workstation in some cases. The track also demonstrated how to connect to an existing Linux server, running MySQL or another back end, and how to get your VFP apps talking to that back-end data. This track also showed both of the positions you can take: Rainer disliked it wholeheartedly (the bad-guy position in these talks) and Jochen loved it (the good-guy and "typical Linux techie" position we all love). These opposing positions lasted for three sessions, and both sides were shown with their pros and cons in live and lively discussions between the speakers (club banging was forbidden). Gerold Lübben showed how Visual FoxPro and MySQL can work together. MySQL is one of the most well-known open source databases, available for nearly all platforms.
    Particularly in eBusiness, MySQL is well positioned and well known for its performance and its stability. Still, we like Visual FoxPro more - for sure. [...]

    Read the article

  • Windows 2008 R2 SMB / CIFS Logging to diagnose Brother MFC Network Scanning

    - by Steven Potter
    I am attempting to set up network scanning on a Brother MFC-9970CDW printer. According to the Brother documentation, the printer is set up to connect to any CIFS network share. I applied all of the appropriate settings on the printer, however I get a "sending error" when I try to scan a document. When I look at the logs of the 2008 R2 server that I am attempting to connect to, I can see in the security log where the printer successfully authenticates, but nothing else is logged. I would assume that immediately after the authentication, the printer is making a CIFS request and some sort of error is occurring; however, I can't seem to find any way to log this information to find out what is going on. Is it possible to get Windows 2008 to log SMB/CIFS traffic?
    Followup: I installed Microsoft Netmon and captured the packets associated with the transaction:
        510 3:04:28 PM 7/9/2012 34.4277743 System 192.168.1.134 192.168.1.10 SMB SMB:C; Negotiate, Dialect = NT LM 0.12 {SMBOverTCP:30, TCP:29, IPv4:22}
        511 3:04:28 PM 7/9/2012 34.4281246 System 192.168.1.10 192.168.1.134 SMB SMB:R; Negotiate, Dialect is NT LM 0.12 (#0), SpnegoToken (1.3.6.1.5.5.2) {SMBOverTCP:30, TCP:29, IPv4:22}
        519 3:04:29 PM 7/9/2012 34.8986214 System 192.168.1.134 192.168.1.10 SMB SMB:C; Session Setup Andx, NTLM NEGOTIATE MESSAGE {SMBOverTCP:30, TCP:29, IPv4:22}
        520 3:04:29 PM 7/9/2012 34.8989310 System 192.168.1.10 192.168.1.134 SMB SMB:R; Session Setup Andx, NTLM CHALLENGE MESSAGE - NT Status: System - Error, Code = (22) STATUS_MORE_PROCESSING_REQUIRED {SMBOverTCP:30, TCP:29, IPv4:22}
        522 3:04:29 PM 7/9/2012 34.9022870 System 192.168.1.134 192.168.1.10 SMB SMB:C; Session Setup Andx, NTLM AUTHENTICATE MESSAGEVersion:v2, Domain: CORP, User: PRINTSUPOFF, Workstation: BRN001BA9AD1FE6 {SMBOverTCP:30, TCP:29, IPv4:22}
        523 3:04:29 PM 7/9/2012 34.9032421 System 192.168.1.10 192.168.1.134 SMB SMB:R; Session Setup Andx {SMBOverTCP:30, TCP:29, IPv4:22}
        525 3:04:29 PM 7/9/2012 34.9051855 System 192.168.1.134 192.168.1.10 SMB SMB:C; Tree Connect Andx, Path = \\192.168.1.10\IPC$, Service = ????? {SMBOverTCP:30, TCP:29, IPv4:22}
        526 3:04:29 PM 7/9/2012 34.9053083 System 192.168.1.10 192.168.1.134 SMB SMB:R; Tree Connect Andx, Service = IPC {SMBOverTCP:30, TCP:29, IPv4:22}
        528 3:04:29 PM 7/9/2012 34.9073573 System 192.168.1.134 192.168.1.10 DFSC DFSC:Get DFS Referral Request, FileName: \\192.168.1.10\NSCFILES, MaxReferralLevel: 3 {SMB:33, SMBOverTCP:30, TCP:29, IPv4:22}
        529 3:04:29 PM 7/9/2012 34.9152042 System 192.168.1.10 192.168.1.134 SMB SMB:R; Transact2, Get Dfs Referral - NT Status: System - Error, Code = (549) STATUS_NOT_FOUND {SMB:33, SMBOverTCP:30, TCP:29, IPv4:22}
        531 3:04:29 PM 7/9/2012 34.9169738 System 192.168.1.134 192.168.1.10 SMB SMB:C; Tree Disconnect {SMBOverTCP:30, TCP:29, IPv4:22}
        532 3:04:29 PM 7/9/2012 34.9170688 System 192.168.1.10 192.168.1.134 SMB SMB:R; Tree Disconnect {SMBOverTCP:30, TCP:29, IPv4:22}
    As you can see, the DFS referral fails and the transaction is shut down. I can't see any reason for the DFS referral to fail. The only reference I can find online is https://bugzilla.samba.org/show_bug.cgi?id=8003. Does anyone have any ideas for a solution?

    Read the article
