Search Results

Search found 958 results on 39 pages for 'limitations'.


  • jQuery - removing an image before the client browser attempts to download it

    - by ajbrun
    Hi there, I wonder if anyone could help me with a problem I've been having. I have a number of large images available, but due to space limitations I can't create multiple copies of them at various sizes. I have used the PHP GD functions to resize the images to the sizes I need and output them to the browser. This works, but it obviously takes some processing time, which impacts page load times. I'm fine with that, but I only want to show each image once it has fully loaded, and show a loading GIF in its place until then. I'm using jQuery to do this.

    The problem I'm having is making the page functional whether or not the client has JavaScript enabled. If JS is not enabled, I want standard img tags to be output; otherwise the images are removed and replaced with a loading GIF until they have fully loaded. The link below shows a simple example that is not friendly to non-JavaScript users (try turning JS off): http://jqueryfordesigners.com/demo/image-load-demo.php

    I've been testing the basics using the code below. The attr() call will be replaced with something like remove(); this is just a test to make something happen to the image before the browser tries to load it.

        $(document).ready(function() {
            $("#Thumbnails .thumbnail img").attr('src', '#');
        });

    In IE this works correctly: the image source is replaced with "#" BEFORE the client browser gets a chance to start downloading the image. In Firefox, however, it downloads the image and THEN changes the source. It seems to me that Firefox is firing the jQuery ready event later than it should; as far as I know, it should run before the standard onload event and before anything has started loading. If it helps, I'm testing with a good number of images on screen (81). Am I doing something wrong?

    Read the article

  • Project management software, available options

    - by canni
    Hey, sorry for posting this here; I know this question better suits SuperUser, but I would like answers from a developer's point of view. I have been using Indefero for project management etc. for some time, but I found that Indefero's limitations are too big for my team. I'm searching for project-management software that best suits these needs:

      - Open source, but I can consider commercial apps
      - Git integration is mandatory, ideally with support for multiple repositories per project
      - Time tracking; good if it can have a Gantt chart connected with issues etc.
      - Issue, milestone, and task tracking
      - Good if it can be integrated with Gitosis, or has similar repository access control
      - It must be possible to set it up on our own server
      - Markdown syntax support is mandatory (or an easy way to install a plugin for it)
      - Issue tagging will be an advantage
      - It will be used by the developer team 99% of the time, but it has to have a simple enough interface that clients can file bug reports etc. per project

    It does not have to meet all of these needs, but it would be good if it could :) What options do you know of and can recommend?

    Read the article

  • Configuring visual subversion to exclude

    - by douglasrahn
    I have been searching for documentation on how to exclude files with VisualSVN but have not found any. The documentation I do find either doesn't match my file structure or references files/directories I'm missing. For example, the only file I can find with configuration items in it seems to be completely commented out, and is missing the miscellany section as well as any enable-auto-props setting, etc. Ultimately I need to exclude some files so that my development can continue without SVN errors; I am constantly receiving errors for pbxuser and other project files and would like to make sure this is not causing some of my other headaches.

    Here is the information I would love to use but can't, because it doesn't match my setup. From "How to 'fix' Subversion in Xcode 3", posted December 10, 2008 by Rodney Aiglstorfer:

      If you don't take the necessary steps to prepare for Subversion, you will run into problems using it in Xcode. This is because Xcode produces files that "confuse" Subversion: it either thinks they are text files when they are really binary files, or the reverse. To overcome these limitations, you need to make some simple changes to the Subversion configuration file in your user home directory. Here are some steps you can follow to ensure that you will be able to use Subversion within Xcode without any issues.

      Step 1: Open the Subversion configuration file ~/.subversion/config. NOTE: If the ".subversion" directory doesn't exist yet, run "svn status"; the command fails, but it creates the necessary files to get you started.

      Step 2: Enable "global-ignores" and add new things to ignore. Find the line that contains the text "global-ignores" and append the following text: build *~.nib *.so *.pbxuser *.mode* *.perspective*

    What I am really looking for is how to exclude the files I need to for VisualSVN. It shouldn't really be different from regular SVN, since VisualSVN claims to use the same product and just puts a GUI on it.
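    For what it's worth, the per-directory svn:ignore property is another way to get a similar effect, and it should work the same way whether the working copy is managed by VisualSVN or by the stock command-line client (the global-ignores approach quoted above lives in %APPDATA%\Subversion\config on Windows). A minimal sketch; the pattern list is just an example:

        # put one ignore pattern per line in a temporary file
        printf '%s\n' 'build' '*.pbxuser' '*.mode*' '*.perspective*' > ignore.txt
        # attach the patterns as the svn:ignore property on the project directory
        svn propset svn:ignore -F ignore.txt .
        # commit the property change so every working copy picks it up
        svn commit -m "Ignore user-specific project files"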

    Read the article

  • Creating 3rd party library in Android. Need suggestions!!

    - by Lalith
    Hello all, I am planning to create a third-party library for Android application developers. The main concern is that I do not want users to have to make many changes to their code base. The following are the main things I want to clarify before diving into the project:

    1) I want to attach my own event handlers to the application dynamically. Given the limitation I mentioned above, I cannot make users put the relevant code in their handlers, as I need to support existing applications as well. I found out that we cannot attach more than one event handler per listener. For example, Button.setOnClickListener(someListener) sets one listener; in traditional desktop applications we can add more than one handler, but here if I call Button.setOnClickListener(someOtherListener) again, it overwrites the previous listener. Is there any way I can attach my own listener alongside the application's existing listeners dynamically? Or is this really not supported on Android yet?

    2) Is there any existing third-party alternative that monitors user interactions on Android?

    3) If this is not supported now, are there any future plans to support this feature?

    Please let me know, as I did not get much feedback on Stack Overflow. Any suggestions and workarounds would really help me continue with this project. Regards, Lalith

    Read the article

  • What is the best practice when using UIStoryboards?

    - by Scott Sherwood
    Having used storyboards for a while now, I have found them extremely useful; however, they do have some limitations, or at least unnatural ways of doing things. While it seems like a single storyboard should be used for your app, even a moderately sized application runs into several problems:

      - Working within teams is made more difficult, as conflicts in storyboards can be problematic to resolve (any tips on this would also be welcome).
      - The storyboard itself can become quite cluttered and unmanageable.

    So my question is: what are the best practices of use? I have considered a hybrid approach, with logical tasks split into separate storyboards, but this results in the UX flow being split between the code and the storyboards. To me this feels like the best way to create reusable actions such as login flows, etc. Also, should I still consider a place for XIBs? This article has quite a good overview of many of the issues, and it proposes that scenes with only one screen should use XIBs. Again this feels unusual to me; given Apple's support for instantiating unconnected scenes from a storyboard, it would suggest that XIBs won't have a place in the future, but I could be wrong.

    Read the article

  • Open Source Licenses, which one?

    - by sam
    Recently I created a Java class, a custom layout manager, which I want to make open source and distribute. So it's not really a "product", nor a complete program. Here's the list of permissions and specifications:

      - You are free to use and modify it, with some limitations: package and class names should remain the same (if you want another name, extend this class); changing the .jar file or project name is OK.
      - You don't need to share your modifications.
      - You can't modify it and then sell it to others.
      - You can use it as part of your commercial software. (For example, it's OK if you created an instant messaging program that uses my layout, since your core business isn't the layout but the messaging program. It's NOT OK if you created another layout by extending mine, added some features, and sold it.)
      - You can't remove the author's name or the author's website address.
      - You are free to donate. :D

    Basically, it's free and it's OK as long as you give me credit and don't make money with it. I guess it might be a little complex, since you can use it commercially but cannot sell it separately. I have looked through almost all the licenses, and the closest one was the MIT license, but it says that you can sell it, so I don't really want to use that one. Is there any license that fits all the permissions I stated? Thanks.

    Read the article

  • Partitioning recommendations for a Proxmox VM Server (OpenVZ)

    - by luison
    We are new to virtualization and we are planning to turn our online server into a virtualized one, mainly for maintenance, backup, and recovery improvements. Initially we would only have one real virtual system with load, plus 1-3 copies for testing and recovery, and maybe a small centralized syslog virtual machine. If possible, we would like the host machine to include iptables plus rsync to back up to other machines, and some other global security systems. Because of this, and the offerings of our hosting supplier, we are mainly considering Proxmox for its simplicity (we like the idea of its web admin panel), and as I understand it, the container approach of OpenVZ systems may fit well resource-wise with our setup. The base system comes with Debian, so we can personalise it to our requirements.

    Proxmox's installer sets up an LVM partition for the VMs by default. Our doubts are about what the best partition structure would be, considering that:

      - We would like to have a mirror of the root partition we could boot from if required (our provider supports booting the system from another partition via the control panel).
      - We would ideally like to have a partition that could be shared among the VM systems. We still don't know if this is possible directly with OpenVZ containers; otherwise we are considering sharing it via NFS on the host machine.
      - We want to use the backup system available in the Proxmox host administrator to schedule VM backups and then rsync them to another machine.

    With this, based on a Linux RAID of approximately 750 GB, we are considering something like:

      - ext3_1  /            (20 GB)
      - ext3_2  /bak_root    (20 GB) mostly unmounted, root partition sync
      - LVM_1   /var/lib/vz  (390 GB) partition for virtual images
      - LVM_2   /shared_data (30 GB)
      - LVM_3   /backups     (300 GB) where all backups would be allocated

    Our initial tests with Proxmox seem to have issues with snapshot backups in this layout, perhaps because they cannot be done to another LVM partition (error: command 'lvcreate --size 1024M --snapshot --name vzsnap-ns204084.XXX.net-0 /dev/pve/LV' failed with exit code 5), in which case we might have to use a standard ext3 partition (but we are unsure if we can do this given the four-primary-partition limit).

    Does this make more or less sense? Would it be mad to, for example, write the VMs' /var/log to an NFS-mounted partition (on the host system)? Are there any other easier ways to mount host system partitions (or folders) in the VMs?
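    As a rough illustration of that LVM split, here is a minimal sketch of the commands involved, assuming a default Proxmox volume group named pve (the names and sizes are just the ones proposed above, not a recommendation):

        # carve the proposed logical volumes out of the existing volume group
        lvcreate -L 390G -n data pve        # to be mounted at /var/lib/vz (VM images)
        lvcreate -L 30G  -n shared pve      # to be mounted at /shared_data
        lvcreate -L 300G -n backups pve     # to be mounted at /backups (vzdump target)
        mkfs.ext3 /dev/pve/data && mkfs.ext3 /dev/pve/shared && mkfs.ext3 /dev/pve/backups
        # snapshot-mode backups need unallocated extents in the same volume group,
        # so check that some free space is left over after the above:
        vgdisplay pve | grep -i free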

    Read the article

  • Excel 2007 VBA macros don't work in Parallels

    - by MindModel
    I've got a complex Excel spreadsheet I need to use at work. My colleagues use the spreadsheet on Windows PCs, with no special configuration required. I want to run it on a MacBook Pro running Snow Leopard. The spreadsheet contains VBA macros which connect to external Oracle databases over the Internet. If I understand correctly, Excel on the Mac doesn't run VBA macros, so I have to use Parallels. I installed Parallels on the Mac and it's running correctly, as far as I can tell, and I installed Excel 2007 under Parallels. I can open the Excel spreadsheet in Parallels and click buttons in the spreadsheet to run macros, but the macros fail with compiler errors.

    I don't have the password for the source code of the VBA macros, and if possible I don't want to dig into the code at that level. I know that there are quite a few things that could go wrong, and examining the VBA code might help, but I'm hoping to solve the problem without going down that road. The spreadsheet runs without any special configuration on Windows, so I'm wondering if anyone out there knows of any limitations of Excel VBA macros under Parallels, or anything else I could do to get this spreadsheet working. It's the only thing that's keeping me from using this MacBook Pro at work.

    Here is the error message:

      Compile error in hidden module: clsXXXXx0020Toolx0020Ser. This error commonly occurs when code is incompatible with the version, platform, or architecture of this application. Click Help for more info.

      Compile error in hidden module: A protected module contains a compilation error. Because the error is in a protected module it cannot be displayed. This error commonly occurs when code is incompatible with the version or architecture of this application (for example, code in a document targets 32-bit Microsoft Office applications but it is attempting to run on 64-bit Office).

      This error has the following cause and solution. Cause of the error: the error is raised when a compilation error exists in the VBA code inside a protected (hidden) module. The specific compilation error is not exposed because the module is protected. Possible solutions: if you have access to the VBA code in the document or project, unprotect the module, and then run the code again to view the specific error. If you do not have access to the VBA code in the document, contact the document author to have the code in the hidden module updated.

    Read the article

  • All the Gear and No Idea: Suggestions for re-designing my home/office/entertainment network

    - by 5arx
    Help/advice/suggestions please: I have a load of kit that I love but which currently operates in a disconnected, sometimes counter-productive way. Because I never really had a master plan, I just added these things one after another and connected them up in ad hoc ways. Since I bought my MacBook I've found I spend much less time on the Mac Pro that was until then my main machine. Perversely, as my job involves writing .NET software, I spend a lot of my Mac time actually inside a Windows 7 VM. I stream media from the HP box to the PS3 and thus to the TV, but it's not without its limitations/annoyances. We listen to each other's iTunes libraries, but the music files are all over the place and it would be good to know they were all safely in one location (and fully backed up).

    I need to come up with a strategy that will allow me to use all the kit for work, play (recording live music, making tunes, iMovie work), pushing/streaming media to the TV, and sharing files with my other half (she uses a Windows laptop and her iPod touch). Ideally I'd like to be able to work on any of the machines and have a shared home drive visible to all of them, so my current files are synced up wherever I am. It would be great if I could access everything securely and quickly over the web. I'd also like to be able to set up a background backup process.

    The kit list thus far:

      - Apple Mac Pro, 8 GB RAM, 3 x 250 GB RAID 0 + 1 TB
      - Apple MacBook Pro 13", 8 GB RAM, 250 GB (I spend a lot of my work time in a Windows 7 VM on this)
      - Crappy Acer laptop (for the children's use: iPlayer, watching movie/TV files)
      - HP ProLiant server, 4 GB RAM, 80 GB + 160 GB + 300 GB
      - Sun Ultra 10, 2 x 80 GB (old, but in top-notch condition)
      - PS3, 160 GB
      - iPod Classic
      - 2 x 8 GB iPod Touch

    Observations:

      - Part of the problem is our dual use of Windows and OS X; we can't go for a pure NT-style roaming profile.
      - Because the server is also used for hosting test/beta applications and a SQL Server db, it can't be dedicated to file serving.
      - The two Macs really could do with sharing a roaming profile or similar.
      - I'd love to be able to do something useful with the Ultra 10. My other half has been trying to throw it away for over five years now and regularly asks what function it serves in my study :-(
      - I've got no shortage of 500 GB external USB hard drives.
      - iMovie files are very large and would ideally be processed on a RAID system.
      - Apple's Time Machine isn't so great.

    If anyone could suggest all or part of a setup that would fulfil some of my requirements I'd be very grateful. I am willing to consider purchasing one or two more bits of kit (an Apple TV and a Squeezebox have been mooted by friends) if they will help make efficiencies rather than add to the chaos and confusion. Thanks for looking.

    Read the article

  • Directory listing through FTPS (TLS) is not working

    - by Aron Rotteveel
    We recently switched our server to require TLS for every connection. This has worked flawlessly so far, but one of our clients is having problems. Some facts:

      - Server uses Pure-FTPd
      - Server has a passive port range configured
      - Server has no firewall limitations affecting FTP
      - Client uses WS_FTP
      - Client is behind a router
      - Client connects to the same IP as everyone else, using passive mode
      - All other clients have no trouble connecting

    Because of the TLS requirement, connecting in active mode is almost impossible, but passive mode is working fine for everyone except this specific client. It seems that he is able to connect, but once a LIST command is performed, things go wrong. Log:

      Finding Host <clienthost> ...
      Connecting to <serverip:21>
      Connected to <serverip:21> in 0.020000 seconds, Waiting for Server Response
      Initializing SSL Session ...
      220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------
      220-You are user number 5 of 50 allowed.
      220-Local time is now 22:14. Server port: 21.
      220-This is a private system - No anonymous login
      220-IPv6 connections are also welcome on this server.
      220 You will be disconnected after 15 minutes of inactivity.
      AUTH TLS
      234 AUTH TLS OK.
      SSL session NOT set for reuse
      SSL Session Started.
      Host type (1): Automatic Detect
      USER <user>
      331 User <user> OK. Password required
      PASS (hidden)
      230-User <user> has group access to: <user>
      230 OK. Current restricted directory is /
      SYST
      215 UNIX Type: L8
      Host type (2): Unix (Standard)
      PBSZ 0
      200 PBSZ=0
      PROT P
      200 Data protection level set to "private"
      PWD
      257 "/" is your current location
      CWD /public_html
      250 OK. Current directory is /public_html
      PWD
      257 "/public_html" is your current location
      TYPE A
      200 TYPE is now ASCII
      PASV
      227 Entering Passive Mode (<serverip>,132,100)
      connecting data channel to <serverip>:132,100(33892)
      Substituting connection address <serverip> for private address <serverip> from PASV
      Using external address <customer ext. ip> instead of local address <customer int. ip> for PORT command
      PORT 82,161,56,225,195,181
      200 PORT command successful
      LIST
      Error reading response from server. It appears that the connection is dead. Attempting reconnect...

    Any help is appreciated.
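    A couple of quick client-side probes can help separate a TLS problem from a blocked data connection; a hedged sketch using stock tools (the hostname is a placeholder, and 33892 is simply the passive port from the log above):

        # does the AUTH TLS handshake on the control channel complete from the client's network?
        openssl s_client -connect ftp.example.com:21 -starttls ftp -quiet </dev/null
        # can the client reach a port inside the server's passive range at all?
        nc -vz ftp.example.com 33892

    If the handshake succeeds but data connections fail only from this client's network, that client's router is the likely culprit: with the control channel encrypted it can no longer see the PASV/PORT exchange and open or translate the data ports.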

    Read the article

  • ZFS/Btrfs/LVM2-like storage with advanced features on Linux?

    - by Easter Sunshine
    I have 3 identical internal 7200 RPM SATA hard disk drives in a Linux machine. I'm looking for a storage set-up that will give me all of this:

      - Different data sets (filesystems or subtrees) can have different RAID levels, so I can choose performance, space overhead, and risk trade-offs differently for different data sets while using a small number of physical disks (very important data can be 3x RAID 1, important data can be 3x RAID 5, unimportant reproducible data can be 3x RAID 0).
      - If each data set has an explicit size or size limit, the ability to grow and shrink the size limit (offline if need be).
      - Avoid out-of-kernel modules.
      - R/W or read-only COW snapshots. If it's a block-level snapshot, the filesystem should be synced and quiesced during the snapshot.
      - Ability to add physical disks and then grow/redistribute RAID 1, RAID 5, and RAID 0 volumes to take advantage of the new spindle and make sure no spindle is hotter than the rest (e.g., in NetApp, growing a RAID-DP raid group by a few disks will not balance the I/O across them without an explicit redistribution).

    Not required, but nice to have:

      - Transparent compression, per file or subtree. Even better if, like NetApp, it analyzes the data first for compressibility and only compresses compressible data.
      - Deduplication that doesn't have huge performance penalties or require obscene amounts of memory (NetApp does scheduled deduplication on weekends, which is good).
      - Resistance to silent data corruption like ZFS (this is not required because I have never seen ZFS report any data corruption on these specific disks).
      - Storage tiering, either automatic (based on caching rules) or user-defined rules (yes, I have all-identical disks now, but this will let me add a read/write SSD cache in the future). If it's user-defined rules, they should be able to promote to SSD at the file level and not the block level.
      - Space-efficient packing of small files.

    I tried ZFS on Linux, but the limitations were:

      - Upgrading is additional work because the package is in an external repository and is tied to specific kernel versions; it is not integrated with the package manager.
      - Write IOPS do not scale with the number of devices in a raidz vdev.
      - Disks cannot be added to raidz vdevs.
      - I cannot put selected data on RAID 0 to reduce overhead and improve performance without additional physical disks or giving ZFS a single partition of each disk.

    ext4 on LVM2 looks like an option, except I can't tell whether I can shrink, extend, and redistribute RAID-type logical volumes onto new spindles (of course, I can experiment with LVM on a bunch of files). As far as I can tell, it doesn't have any of the nice-to-haves, so I was wondering if there is something better out there. I did look at LVM dangers and caveats, but then again, no system is perfect.
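    For what it's worth, the "different RAID level per data set" part on its own is expressible with plain LVM2, since recent LVM can create RAID-type logical volumes directly; a minimal sketch with placeholder names and sizes (not an endorsement of the rest of the feature list):

        # one volume group spanning the three disks
        pvcreate /dev/sda /dev/sdb /dev/sdc
        vgcreate vg0 /dev/sda /dev/sdb /dev/sdc
        # very important data: three-way mirror (original plus two copies)
        lvcreate --type raid1 -m 2 -L 50G  -n critical vg0
        # important data: RAID 5 across the three disks (two data stripes plus parity)
        lvcreate --type raid5 -i 2 -L 200G -n important vg0
        # reproducible data: plain striping across all three spindles for speed
        lvcreate -i 3 -I 64 -L 300G -n scratch vg0

    Growing and reshaping these after adding a disk is where LVM gets less comfortable, which is essentially the open question in the post.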

    Read the article

  • What are some of the best wireless routers for a price-conscious home power-user?

    - by Alain
    I'm extremely dissatisfied with the 'popular' choice of routers for homes and small offices. They are expensive (upwards of $60), lack a great deal of useful configuration options, and seem to need restarting quite often (Linksys comes to mind). I've been on the market for a good router lately, and have slowly been collecting a set of requirements I feel good routers should meet:

      - Maximum number of TCP/IP connections. This isn't something I see any routers advertise, but in terms of supporting torrent applications, I've been screwed by routers that support fewer than 20 here. From what I understand a fairly standard number is 200, but there are not-so-expensive routers that support thousands.
      - Router configuration menu. Most have standard menus that let you set up basic things like your wireless network encryption settings, UPnP, and maybe even a DMZ (demilitarized zone). An absolute requirement for me, however, is firmware good enough to support: explicit port forwarding; assigning static local IPs to specific MAC addresses, or at least port forwarding by MAC address; port, IP, and MAC filtering; a dynamic DNS service for home users who want to set up a server but have a dynamic IP; and ideally traffic shaping, giving priority to packets from certain machines or over certain ports.
      - Strong wireless signal. If getting a reliable signal requires me to be so close to the router that I could connect an Ethernet cable, it's not good enough.
      - As many Ethernet ports as possible, because I want to be able to switch from console gaming to PC gaming without visiting my router.

    So far, the best thing I've stumbled upon (in the bargain bin at Staples) was a $20 retail plus router. It was meant to be the cheapest alternative until I could find something better to buy online, but I was actually blown away by the firmware capabilities. It supports defining reserved bandwidth for certain network traffic, dynamic DNS, reserving local IPs for specific MAC addresses, etc. At 2 am when my roommate is killing our Internet with their torrents, I can limit their bandwidth without outright blacklisting them. I have, however, met serious limitations when it comes to network traffic between local machines. It claims a 300 Mbps connection, but I have trouble streaming videos from my PC to my console or other laptops wirelessly. It has a meltdown and needs to be reset once in a while (no more than a couple of times a month), and it's got a 200-connection limit. There are 4 Ethernet ports in the back, but I'm pretty sure the first doesn't work.

    So some great answers to this question would be: any metrics you use to compare routers, and requirements you have for new candidates; the best routers you've found for supporting home servers, file management systems, high-volume torrent traffic, a good price/feature ratio, etc.; and good configuration advice (aside from 'use Ethernet whenever possible'). Thanks for your feedback and experiences!

    Read the article

  • Is real-time or synchronous replication possible over WAN link?

    - by johnnyb10
    The company I work for is looking to implement truly real-time file replication with file locking over a WAN link that spans over 2000 miles. We currently have a 16-drive SAN setup in our east coast office. We also have an office out in Colorado that will have the exact same SAN setup. The idea is to have those two SANs contain exactly the same data at all times, which will allow us to work with the same data pool and will also provide us with an offsite backup solution should a failure occur on either end. We're running Server 2008. The objective is to let users in the east coast office work on files and have those changes instantly updated on the Colorado SAN as well. We also need file locking so that there will be no conflicts or overwritten changes if users attempt to work on the same file.

    Is this scenario even possible, at speeds that would make the files usable? And if so, what software would we need to pull it off? As I understand it, DFS-R does not provide file locking, so if we used that we would need a third-party product like PeerLock. But I don't even know if DFS-R is an option: can it replicate quickly enough over a WAN link? Can any product? It seems that if we used synchronous replication, the programs would be unacceptably slow, as every write would have to wait for confirmation from the other end of the link. But if we used asynchronous replication, what kind of latency would we be looking at?

    There is a product from GlobalScape called WAFS that claims to provide "File coherence with real-time file locking, file release, and synchronization" and says that "As files are modified, changes are mirrored instantly using intelligent byte-level differencing to minimize the impact on network bandwidth". So this sounds like synchronous replication, but that doesn't even seem possible, given physical limitations such as the speed of light. If anyone has any experience with this kind of setup, or knows whether it's even possible, I'd appreciate your input and suggestions, including recommendations for software that we should check out.
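    To put a rough number on the speed-of-light point (back-of-the-envelope only, assuming signals in fibre travel at roughly two-thirds of c and ignoring routing and protocol overhead):

        2000 miles  ~ 3200 km one way
        3200 km / ~200,000 km/s  ~ 16 ms one way, so at least ~32 ms per round trip

    A fully synchronous write over that distance therefore waits on the order of 32 ms or more for the remote acknowledgement before it can complete, which is why products covering this kind of distance are usually asynchronous or "near real-time", with locking handled separately.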

    Read the article

  • Debugging IO limitation

    - by Martin F
    I have a Fedora box with some severe IO limitations which I have no idea how to debug. The server has an Areca Technology Corp. ARC-1130 12-port PCI-X to SATA RAID controller with twelve 7200 RPM 1.5 TB disks and a Marvell Technology Group Ltd. 88E8050 PCI-E ASF Gigabit Ethernet controller.

    uname -a output: 2.6.32.11-99.fc12.x86_64 #1 SMP Mon Apr 5 19:59:38 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux

    The server is a file server running Nginx with the stub status module enabled, so I can see the current number of connections. The problem presents itself when I have a high number of simultaneous connections in a writing state: usually around 350, and at this very moment it's at 590 and the server is almost unusable and stuck at 230 Mbit/s. If I run top and hit 1 to see per-core CPU usage, all 4 cores are at around 99% I/O wait; if I run iotop, the Nginx workers are the only processes producing any read load, currently at around 25 MB/s. I have each of the workers bound to its own core.

    Initially I figured the disks were just overloaded, but I've run fsck and smartmontools checks and found no errors. I also ran an iozone test, which you can see the result of here: http://www.pastie.org/951667.txt?key=fimcvljulnuqy2dcdxa

    Additionally, when the number of connections is low I have no problem getting good speed: if I wget over the local network it easily hits 60 MB/s. Right now I tried putting a file in /dev/shm, symlinked a file from the public dir to it, and used wget over the local network, and only got 50 KB/s. Also, if I try to cp /dev/shm/test /root/test it quickly copies around 740 MB and then slows down heavily, again with iotop reporting 99% I/O wait.

    I'm not really sure how to go about figuring out what the problem is. It could be a natural disk limitation, but then the file served from /dev/shm ought to transfer quickly, so it seems there's a network limit; but that's fine when there aren't many connections. Perhaps it's a TCP stack problem, but I really have no idea how to check that. Any suggestions on how to proceed with debugging would be very welcome. If additional information is required then let me know and I'll try to get it. Thanks.
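    A few standard starting points for this kind of investigation, sketched below (device names are examples; iostat comes from the sysstat package):

        # per-device utilisation, queue depth and await, refreshed every second
        iostat -x 1
        # which I/O scheduler the array is using (cfq, deadline, noop)
        cat /sys/block/sda/queue/scheduler
        # current read-ahead for the RAID device, in 512-byte sectors
        blockdev --getra /dev/sda
        # only processes actually doing I/O, three batch samples for the record
        iotop -obn 3

    Comparing await and %util per device under load usually shows fairly quickly whether the array itself is saturated or whether the time is being lost somewhere above it.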

    Read the article

  • How should I set up my Hyper-V server and network topology?

    - by Daniel Waechter
    This is my first time setting up either Hyper-V or Windows 2008, so please bear with me. I am setting up a pretty decent server running Windows Server 2008 R2 to be a remote (colocated) Hyper-V host. It will be hosting Linux and Windows VMs, initially for developers to use but eventually also to do some web hosting and other tasks. Currently I have two VMs, one Windows and one Ubuntu Linux, running pretty well, and I plan to clone them for future use.

    Right now I'm considering the best way to configure developer and administrator access to the server once it is moved into the colocation facility, and I'm seeking advice on that. My thought is to set up a VPN for access to certain features of the VMs on the server, but I have a few options for going about this:

      - Connect the server to an existing hardware firewall (an old-ish NetScreen 5GT) that can create a VPN and map external IPs to the VMs, which will have their own IPs exposed through the virtual interface. One problem with this choice is that I'm the only one trained on the NetScreen, and its interface is a bit baroque, so others may have difficulty maintaining it. The advantage is that I already know how to do it, and I know it will do what I need.
      - Connect the server directly to the network and configure the Windows 2008 firewall to restrict access to the VMs and set up a VPN. I haven't done this before, so there will be a learning curve, but I'm willing to learn if this option is better long term. Another advantage is that I won't have to train anyone on the NetScreen interface. Still, I'm not certain about the capabilities of the Windows software firewall when it comes to creating VPNs, setting up rules for external access to certain ports on the Hyper-V guests' IPs, and so on. Will it be sufficient for my needs and easy enough to set up and maintain?

    Anything else? What are the limitations of my approaches? What are the best practices, and what has worked well for you? Remember that I need to set up developer access as well as consumer access to some services. Is a VPN even the right choice?

    Read the article

  • Deploying website content via Subversion

    - by Johann
    We have recently set up a new development infrastructure and process for one of our clients. This involves the strict use of Subversion as a central source code repository. The SVN repositories contain a separate branch for code on the live system (/branches/live/). The repositories are used for PHP content (mainly WordPress blogs), but in future they may hold other ASP code as well. Bonus points for a solution that works more or less the same way with ASP code on Windows Server 2008 R2.

    We have two servers: one staging system and one live system. The staging system is updated regularly with the code in the trunk; the live system is updated manually. Each webroot on the servers is a working copy of either the trunk (staging system) or the live branch (live system). The current workflow is: develop on the dev's box - commit into the trunk - auto-deploy on the staging system - test on the staging system - merge into /branches/live/ - deploy manually on the live system.

    This works very well for one-way changes; however, we have trouble with every WordPress (or plugin) update. The WP update process removes the directories and unpacks the archive of the new version. This removes the .svn admin areas as well, which produces a lot of errors. We could switch to SVN 1.7 with a single, global admin area, but this would only solve one part of the problem. In the end, we did the update via the WP GUI, restored the .svn admin areas, added/removed the files, and committed the changes to the trunk. After testing, we had to do basically the same thing on the live server (except the commit; we just reverted the changes and merged the new files from the staging system to the live system).

    I'm currently thinking of the following:

      - the htdocs of each website is an svn export
      - each website has an svn working copy beside the htdocs directory
      - a script "replays" the changes from htdocs into the working copy after a WP update (rsync'ing the changed files to the working copy, rsync'ing new files and svn add-ing them, and finally svn delete-ing the removed files); the script would have to exclude some files (like wp-config.php, uploads/temp directories, etc.) - see the sketch below

    Are there better ways to do this? Unfortunately, a complete CI server is out of scope due to time and budget limitations.
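    A minimal sketch of what that replay script could look like, under the assumptions above (the paths, the exclude list, and the htdocs/working-copy layout are placeholders; filenames containing spaces would need more care):

        #!/bin/sh
        SITE=/var/www/example.com          # htdocs/ is what WordPress updates, wc/ is the working copy
        # mirror changed, new and deleted files into the working copy; excluded paths
        # (including the .svn admin areas) are protected from deletion by default
        rsync -a --delete --exclude='.svn' --exclude='wp-config.php' \
              --exclude='wp-content/uploads/' "$SITE/htdocs/" "$SITE/wc/"
        cd "$SITE/wc" || exit 1
        # schedule new files for addition and missing files for deletion
        svn add --force .
        svn status | awk '/^!/{print $2}' | xargs -r svn delete
        svn commit -m "Replay WordPress update from htdocs"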

    Read the article

  • I want to virtualize my workstation (Tier 1), Looking for Bare Metal Hypervisor for consumer grade components

    - by Chase Florell
    I find myself in this same bind at least once a year: the bind whereby I'm either upgrading a motherboard or an OS hard drive. It drives me crazy to have to reinstall Windows, Visual Studio, all my add-ins, reconfigure my settings, etc. every single time. I have a layout I like and I want to stick with it. My question is: is there a bare-metal hypervisor on the market that will enable me to virtualize my consumer-grade workstation? I really want to avoid host/client virtualization; bare metal is definitely a better way to go for my needs. Is this a good approach, or am I going to suffer some other undesirable side effects by doing this?

    Clarification: my machine has very limited purposes. My primary use is Visual Studio 2010 Professional, where I develop ASP.NET MVC web applications. The second piece of (system-intensive) software I use is Photoshop CS3. Beyond that, my applications are limited to Outlook, Internet Explorer, Firefox, Opera, Chrome, LinqPad, and various other small apps. Beyond this, I'm considering working on a node.js project and might run Ubuntu on the same hypervisor if possible.

    System specs:

      - Gigabyte motherboard
      - Intel i7 920
      - 12 GB RAM
      - basic 500 GB 7200 RPM HDD for the OS
      - 4 VelociRaptors in RAID 1/0 for the build disk
      - dual GTS 250 (512 MB) graphics cards (non-SLI) for quad monitors

    On a side note, I also wouldn't be opposed to an alternative suggestion if the limitations are too great. I could install ESXi (or XenServer) on my box and build a separate "thin client" to RDP into the virtual machine; it appears that RDP supports dual monitors.

    Edit (Dec 9, 2011): It's been nearly a year since I first asked this question. Since then there have been a lot of great strides in hypervisor technology, and MokaFive is now released for corporate use. I'd love to dig into this question a little more and find out if there is a solid bare-metal hypervisor for workstations running consumer-grade components (i.e. not Dell, HP, Lenovo, etc.).

    Read the article

  • Single domain name potentially resolving to multiple servers

    - by Jace
    First time here at Server Fault, and I apologize in advance that this domain stuff is not really my strength. Any and all suggestions are much appreciated; I am completely lost and incredibly tired! I've inherited an incredibly convoluted system from my predecessor, and I'm trying to find a way to solve it, or I need to be told that it just isn't possible.

    The situation:

      - There is an old site on Server A (some Linux distribution), with the domain SomeDomain.com.
      - There is a new site sitting on Server B (Ubuntu), with the intention of having SomeDomain.com serve it in the future (it is replacing the old site).
      - Server A also has a web app that is currently in use by other departments within the company (accessible at SomeDomain.com/web-app/).

    The goal: to have SomeDomain.com and all extensions of this domain name (subdomains, URLs, etc.) serve the new site on Server B, BUT the URL SomeDomain.com/web-app/ must serve the web app on Server A.

    The catch: Server A is a shared server with a hosting company with VERY limiting restrictions in place. I cannot adjust DNS settings (apart from name servers; I cannot set A records or anything), while I have full access to Server B to do as I wish. Therefore the web app MUST be served from SomeDomain.com/web-app/ and not from a subdomain or anything. These limitations make migrating the web app from Server A to Server B rather undesirable, AND this web app will be replaced in the near future, so it isn't worth the effort right now. Therefore, ultimately I will want one domain name to resolve to Server B's IP address most of the time, but in the event that the URL is SomeDomain.com/web-app/, it should be served by Server A. Note: the domain names don't technically have to resolve to one IP or another, but ultimately the URLs must stay consistent.

    Some things I have tried:

      - I've looked into mod_rewrite and .htaccess to try to achieve this effect, but it doesn't look like it's going to work for me, though I may have done it wrong (on Server B, I just checked whether the request URI was /web-app/ and tried to serve the /web-app/ folder on Server A).
      - I do have the ability to modify the name servers on both servers.
      - I am not able to make a subdomain on Server A that points back to Server A (I assume because the hosting company's servers use the URL to determine which site to serve). I figured this could be useful, as I could set an A record on Server B to point to the web app on Server A, but alas, Server A requires SomeDomain.com.

    If there is any more information I can give, please let me know. I need a nudge in the right direction, ideas, or a solution.
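    One common way to get this effect is a reverse proxy on Server B rather than a rewrite: DNS sends everything to Server B, and Server B passes only /web-app/ through to Server A while preserving the original Host header, so the shared host still sees SomeDomain.com. A rough sketch, assuming Apache on the Ubuntu box; the IP is a placeholder for Server A, and the config path varies between Apache 2.2 (conf.d) and 2.4 (conf-available plus a2enconf):

        sudo a2enmod proxy proxy_http
        printf '%s\n' \
          'ProxyPreserveHost On' \
          'ProxyPass        /web-app/ http://203.0.113.10/web-app/' \
          'ProxyPassReverse /web-app/ http://203.0.113.10/web-app/' \
          | sudo tee /etc/apache2/conf.d/web-app-proxy.conf >/dev/null
        sudo service apache2 reload

    Whether the shared host accepts proxied requests addressed to its IP with the SomeDomain.com Host header is something to verify first; if it routes purely by name this should work, but that is an assumption.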

    Read the article

  • What router hardware or software should be used when multiple public IPs are routed into the same LAN?

    - by lcbrevard
    I am looking for recommendations to replace a set of consumer-grade (Linksys, Netgear, Belkin) routers with something that can handle more traffic while routing more than one static public IP into the same LAN address space. We have a block of static public IPs from Comcast Business, 5 of them usable. Currently four are in use, for:

      - general office access
      - a web server
      - mail and DNS servers
      - a download and backup web server for a separate business

    All systems (a mixture of physical and virtual) are in the same LAN address space (10.x.y.0/24) to allow easy access between them inside the office. There are 30 or more systems in use depending on which virtual machines are currently active, and we have a mixture of Windows, Linux, FreeBSD, and Solaris.

    Currently a separate consumer-grade router is used for each of the four static addresses, with its WAN address set to the specific static address and a different gateway address for each:

      1. uses 10.x.y.1 - various ports are forwarded to various LAN IPs on systems with gateway 10.x.y.1
      2. uses 10.x.y.254 - port 80 is forwarded to a server with gateway 10.x.y.254
      3. uses 10.x.y.253 - ports for mail and DNS are forwarded to a server with gateway 10.x.y.253
      4. uses 10.x.y.252 - ports as needed are forwarded to a server with gateway 10.x.y.252

    Only router 1 is allowed to serve DHCP, and address reservation based on MAC is used for most of the internal "server" IP addresses so they stay at fixed values. (Some are set statically due to limitations in the address reservation capabilities of router 1.) And yes, this really does work! But I am looking for:

      - better DHCP with more capable address reservation
      - higher capacity, so I don't have to power-cycle the routers periodically

    One obvious improvement would be to have a real DHCP server and not use a consumer-grade router for that purpose. I am torn between buying a "professional" router such as Cisco, Juniper, or SonicWall versus learning to configure some spare hardware to perform this function. The price goes up extremely rapidly with capabilities for commercial routers! Worse, some routers require licensing based on the number of clients, a disaster in our environment with so many virtual machines. Sorry for such a long posting, but I am getting tired of having to power-cycle routers and deal with shifting IP addresses afterwards!
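    If the "spare hardware" route is taken, the per-address forwarding that the four consumer boxes do today collapses into a handful of NAT rules on one Linux router; a hedged sketch with placeholder addresses and interface names (one public IP mapped to one internal web server):

        # the router owns all the public addresses on its WAN interface
        ip addr add 203.0.113.11/29 dev eth0
        sysctl -w net.ipv4.ip_forward=1
        # traffic arriving for this public IP goes to the internal web server
        iptables -t nat -A PREROUTING  -i eth0 -d 203.0.113.11 -j DNAT --to-destination 10.0.0.80
        # and the web server's outbound traffic leaves from the same public address
        iptables -t nat -A POSTROUTING -o eth0 -s 10.0.0.80 -j SNAT --to-source 203.0.113.11
        # repeat the pair for each public IP / internal host; DHCP and reservations
        # can then live in one daemon (e.g. dnsmasq) instead of four routers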

    Read the article

  • Ask How-To Geek: Learning the Office Ribbon, Booting to USB with an Old BIOS, and Snapping Windows

    - by Jason Fitzpatrick
    You've got questions and we've got answers. Today we highlight how to master the new Office interface, USB-boot a computer with an outdated BIOS, and snap windows to preset locations.

    Learning the New Office Ribbon

    Dear How-To Geek, I feel silly asking this (in light of how long the new Office interface has been out), but my company finally got around to upgrading from Windows XP and Office 2000, so the new interface is totally new to me. Can you recommend any resources for quickly learning the Office ribbon and the new changes? I feel completely lost after two decades of the old Office interface. Help! Sincerely, Where the Hell is Everything?

    Dear Where the Hell, We think most people were with you at some point in the last few years; "Where the hell is..." could possibly be the slogan for the new ribbon interface. You could browse through some of the dry tutorials online or even get a weighty book on the topic, but the best way to learn something new is to get hands-on. Ribbon Hero turns learning the new Office features and ribbon layout into a game. It's no vigorous round of Team Fortress, mind you, but it's significantly more fun than reading a training document. Check out how to install and configure Ribbon Hero here. You'll be teaching your coworkers new tricks in no time.

    Boot via USB with an Old BIOS

    Dear How-To Geek, I'm trying to repurpose some old computers by updating them with lightweight Linux distros, but the BIOS on most of the machines is ancient and creaky. How ancient? It doesn't even support booting from a USB device! I have a large flash drive that I've turned into a master installation tool for jobs like this, but I can't use it. The computers in question have USB ports; they just aren't recognized during the boot process. What can I do? USB Bootin' in Boise

    Dear USB Bootin', It's great you're working to breathe life into old hardware! You've run into one of the limitations of older BIOSes: USB was around, but nobody was thinking about booting off it. Fortunately, if you have a computer old enough to have that kind of BIOS, it's likely to also have a floppy drive or a CD-ROM drive. While you could make a bootable CD-ROM for your application, we understand that you want to keep using the master USB installer you've made. In light of that we recommend PLoP Boot Manager. Think of it as a boot manager for your boot manager: using it you can create a bootable floppy or CD-ROM that will enable USB booting of your master USB drive. Make a CD and a floppy version and you'll have everything in your toolkit you need for future computer refurbishing projects. Read up on creating bootable media with PLoP Boot Manager here.

    Snapping Windows to Preset Coordinates

    Dear How-To Geek, Once upon a time I had a company laptop that came with a little utility that snapped windows to preset areas of the screen. This was long before the snap-to-side features in Windows 7. You could essentially configure your screen into a grid pattern of your choosing and windows would neatly snap into those grids. I have no idea what it was called or whether it was anything more than a gimmick from the computer manufacturer, but I'd really like to have it on my new computer! Bend and Snap in San Francisco

    Dear Bend and Snap, If we had to guess, we'd guess your company had a set of laptops from Acer, as the program you're describing sounds exactly like Acer GridVista. Fortunately for you, the application was extremely popular and Acer released it independently of their hardware. If, by chance, you've since upgraded to a multiple-monitor setup, the app even supports multiple monitors; many of the configurations are handy for arranging IM windows and other auxiliary communication tools. Check out our guide to installing and configuring Acer GridVista here for more information.

    Have a question you want to put before the How-To Geek staff? Shoot us an email at [email protected] and then keep an eye out for a solution in the Ask How-To Geek column.

    Read the article

  • Trace File Source Adapter

    The Trace File Source adapter is a useful addition to your SSIS toolbox. It allows you to read 2005 and 2008 Profiler traces stored as .trc files and bring them into the Data Flow. From there you can perform filtering and analysis using the power of SSIS. There is no need for a SQL Server connection; this just uses the trace file.

    Example usages:

      - Cache warming for SQL Server Analysis Services
      - Reading the flight recorder
      - Finding the longest-running queries on a server
      - Analyzing statements for CPU or memory, by user or by some other criteria you choose

    Properties. The Trace File Source adapter has two properties, both of which combine to control the source trace file that is read at runtime. SQL Server 2005 and SQL Server 2008 trace files are supported for both the Database Engine (SQL Server) and Analysis Services. The properties are managed by the Editor form or can be set directly from the Properties Grid in Visual Studio.

      - AccessMode (Enumeration): determines how the Filename property is interpreted. The values available are DirectInput and Variable.
      - Filename (String): holds the path of the trace file to load (*.trc). The value is either a full path, or the name of a variable which contains the full path to the trace file, depending on the AccessMode property.

    Trace column definition. Hopefully the majority of you can skip this section entirely, but if you encounter problems processing a trace file this may explain them and allow you to fix them. The component is built upon the trace management API provided by Microsoft. Unfortunately the API methods that expose the schema of a trace file have known issues and are unreliable; put simply, the data often differs from what was specified. To overcome these limitations the component uses some simple XML files. These files enable the trace column data types and sizing attributes to be overridden. For example, SQL Server Profiler or TMO generated structures define EventClass as an integer, but the real value is a string.

      - TraceDataColumnsSQL.xml - SQL Server Database Engine trace columns
      - TraceDataColumnsAS.xml - SQL Server Analysis Services trace columns

    The files can be found in the %ProgramFiles%\Microsoft SQL Server\100\DTS\PipelineComponents folder, e.g.

      "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsSQL.xml"
      "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml"

    If at runtime the component encounters a type conversion or sizing error, it is most likely due to a discrepancy between the column definition as reported by the API and the actual value encountered. Whilst most common issues have already been fixed through these files, we have implemented specific exception traps to direct you to the files so you can fix any further issues due to different usage or data scenarios that we have not tested. An example error that you can fix through these files is shown below:

      Buffer exception writing value to column 'Column Name'. The string value is 999 characters in length, the column is only 111. Columns can be overridden by the TraceDataColumns XML files in "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml".

    Installation. The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft's recommendations.
    You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Finally you will have to add the transformation to the Visual Studio toolbox manually: right-click the toolbox and select Choose Items..., select the SSIS Data Flow Items tab, and then check the Trace File Source transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry "How do I install a task or transform component?"

    We recommend you follow best practice and apply the current Microsoft SQL Server service pack to your SQL Server servers and workstations. Please note that the Microsoft trace classes used in the component are not supported on 64-bit platforms. To use the Trace File Source on a 64-bit host you need to ensure you have the 32-bit (x86) tools available, and that the way you execute your package is set up to use them; please see the help topic "64-bit Considerations for Integration Services" for more details.

    Downloads: Trace Sources for SQL Server 2005; Trace Sources for SQL Server 2008.

    Version history:

      - SQL Server 2008: Version 2.0.0.382 - SQL Server 2008 public release (9 Apr 2009)
      - SQL Server 2005: Version 1.0.0.321 - SQL Server 2005 public release (18 Nov 2008)

    Read the article

  • How to Access a Windows Desktop From Your Tablet or Phone

    - by Chris Hoffman
    iPads and Android tablets can't run Windows apps locally, but they can access a Windows desktop remotely, even with a physical keyboard. In a pinch, the same tricks can be used to access a Windows desktop from a smartphone. Microsoft recently launched their own official Remote Desktop app for iOS and Android devices. Microsoft's official apps are primarily useful for businesses; if you're a typical home user, you'll want to use a different remote desktop solution.

    Microsoft's Remote Desktop App

    Microsoft now offers official Remote Desktop apps for iPad and iPhone as well as Android tablets and smartphones. The apps use Microsoft's RDP protocol to connect to remote Windows systems; they're essentially just new clients for the Remote Desktop feature that has been included in Windows for more than a decade. There are big problems with these apps if you're an average home user. Microsoft's Remote Desktop server is not available on standard or Home versions of Windows, only Professional and Enterprise editions. If you do have the appropriate edition of Windows, you'll have to set up port forwarding and a dynamic DNS service if you want to access your Windows desktop from outside your local network. You could also set up a VPN; either way you'll need to do some footwork. This app is a gift to businesses who are already using Remote Desktop and to enthusiasts who have the more expensive versions of Windows and don't mind the configuration process. To set this up, follow our guide to setting up Remote Desktop for Internet access and connect using the Remote Desktop app instead of traditional Remote Desktop clients.

    TeamViewer

    If you have the standard edition of Windows or you just don't want to mess around with port forwarding and dynamic DNS configuration, you'll want to skip Remote Desktop and use something else. We like TeamViewer for this. Just as it's a great way to remotely troubleshoot your relatives' computers, it's also a great way to remotely access your own computer. It doesn't have the same limitations Microsoft's Remote Desktop system has: it's completely free for personal use, runs on any edition of Windows, and is easy to set up. There's no messing around with port forwarding or dynamic DNS configuration. To get started, just download and run the TeamViewer program on your computer. You can get started with it immediately, but you'll want to set up unattended access to connect remotely without using the codes displayed on your screen. To connect, just install the TeamViewer mobile app and log in with the details the TeamViewer window displays. TeamViewer also offers software that runs on Mac and Linux, so you can remote-control other types of computers from your tablet.

    Other Options

    Microsoft's Remote Desktop app and TeamViewer aren't the only options, of course; there are a variety of apps and services built for this. Splashtop is another fairly popular remote desktop solution that some people report as being faster. Unfortunately, it's not entirely free: the iPad and iPhone app costs $20 at regular price, and to use it over the Internet you'll have to purchase an additional "Anywhere Access Pack". If you're frustrated with TeamViewer's speed and you don't mind spending money, you may want to try Splashtop instead. As always, you could use any VNC server along with a VNC client app. VNC is the do-it-yourself solution: it's an open protocol. Unlike Microsoft's RDP protocol, you can install a VNC server of your own, configure it how you like, and use any mobile VNC client app. This is more flexible because you can install a VNC server on any edition of Windows, or even on non-Windows operating systems, but it otherwise has all the same issues: you have to worry about port forwarding, setting up dynamic DNS, and securing your VNC server.

    Keep an eye on Chrome Remote Desktop. Chrome already offers a built-in remote desktop feature that allows you to remotely control your PC from another Windows, Mac, Linux, or Chrome OS device. Google is rumored to be building an Android app for Chrome Remote Desktop, which would allow you to easily access a computer running Chrome from Android tablets. Google's solution is much more user-friendly for average people than Microsoft's Remote Desktop solution, which is clearly geared towards businesses; Chrome Remote Desktop just requires signing in with a Google account. Remote desktop solutions like Microsoft's Remote Desktop app and TeamViewer are also available for Windows tablets. On Windows RT devices like the Surface RT and Surface 2, they allow you to use the full Windows desktop that's unavailable on your tablet.

    Read the article

  • Windows Phone 7 development: first impressions

    - by DigiMortal
    After a hard week at work I got some free time to play with the Windows Phone 7 CTP developer tools. Although my first test application is still unfinished, I think this is a good moment to share my first experiences with you. In this posting I will give you a quick overview of the Windows Phone 7 developer tools from a developer’s perspective. If you are familiar with Visual Studio 2010 then you will feel comfortable, because the Windows Phone 7 CTP developer tools are based on Visual Studio 2010 Express.

    Project templates

    There are five project templates available. Three of them are based on Silverlight and two on XNA Game Studio:

    - Windows Phone Application (Silverlight)
    - Windows Phone List Application (Silverlight)
    - Windows Phone Class Library (Silverlight)
    - Windows Phone Game (XNA Game Studio)
    - Windows Phone Game Library (XNA Game Studio)

    Currently I am writing two test applications. One of them is based on the Windows Phone Application template and the other on the Windows Phone List Application template. After creating these projects you see the corresponding starting views in Visual Studio (screenshots of both templates are in the original post). I suggest you use one of these templates to get started more easily.

    Windows Phone 7 emulator

    You can run your Windows Phone 7 applications on the Windows Phone 7 emulator that comes with the developer tools CTP. If you run your application, the emulator starts automatically and you can try out how your application works in a phone-like environment. The screenshot in the original post shows the Windows Phone List Application as it is created by default. The emulator is a little slow and uncomfortable, but it works pretty well. So far I have caused only a couple of crashes during my experiments; in these cases the emulator keeps working but Visual Studio gets stuck because it cannot communicate with the emulator.

    One important note: the emulator is based on a virtual machine, although you can see only the phone screen and an options toolbar. If you want to run the emulator you must close all virtual machines running on your machine and run Visual Studio 2010 as administrator. Once the emulator is running you can keep it open, because you can stop your application in Visual Studio, modify it, compile it and re-deploy it without restarting the emulator.

    Designing user interfaces

    You can design the user interface of your application in Visual Studio. When you open a XAML file it is displayed in a window with two panels. The left panel shows the device screen and works as a visual design surface, while the right panel shows the XAML markup and lets you modify the XML if you need to. As this is one of my very first Silverlight applications I felt more comfortable with the XAML editor, because the property names in the property boxes of the visual designer confused me a little.

    The designer panel is not very good because it is visually hard to follow. It has a black background that makes the dark borders of controls very hard to see. If you have a monitor with very high contrast then it may not be a real problem; I have a usual monitor and I have the problem. :) Putting controls on the design surface, dragging and resizing them is also pretty painful. Some controls are drawn correctly, but for some controls you have to set the width and height in XAML before they can be resized. After some practicing it is not so annoying anymore. On the right you can see the toolbox with some controls. This is all you get out of the box, but it is sufficient to get started. After getting some experience you can create your own controls or use existing ones from other vendors or developers. If it is your first time doing stuff with Silverlight then keep Google open – you will need it a lot. After getting over the first shock you get the point very quickly and start developing at normal speed. :)

    Writing source code

    Writing source code is the most familiar part of the whole experience: the good old Visual Studio code editor with all the nice features it has. But here you also get some surprises:

    - The anatomy of Silverlight controls is a little different from that of user controls in web and forms projects.
    - Windows Phone 7 doesn’t run a full version of Windows (I bet it is some version of Windows CE or something like it), so there are fewer system classes you can use.
    - Some familiar classes have fewer methods than in the full version of the .NET Framework, and in these cases you have to write the code yourself or find libraries or source code somewhere.

    These problems are really not so much problems as limitations, and you get over them easily; a small sketch of the kind of code this leads to follows at the end of this excerpt.

    Conclusion

    The Windows Phone 7 CTP developer tools help you do a lot of things on Windows Phone 7. Although I expected better performance from the tools, I think the current performance is not a problem. So far my first test project is going very well and Google has an answer for almost every question. Windows Phone 7 is a mobile device and therefore has fewer hardware resources than desktop computers; this is why the toolset is so limited. The more memory your application needs, the slower the device gets and, as you may guess, the more battery it uses. If you are writing apps for mobile devices then do your best to make your application use as few resources as possible and act as fast as possible.
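
    As an illustration of the “fewer methods than the full .NET Framework” point above, here is a minimal sketch (my own addition, not from the original posting) of the asynchronous-only pattern the phone’s Silverlight flavor pushes you towards: WebClient has no synchronous DownloadString, so you subscribe to the completed event instead. The page and member names (MainPage, ResultText, LoadButton_Click) and the URL are assumptions made up for the example.

        // Minimal sketch of a WP7 Silverlight page code-behind. Assumes the page XAML
        // defines a TextBlock named ResultText and wires a Button's Click to LoadButton_Click.
        using System;
        using System.Net;
        using System.Windows;
        using Microsoft.Phone.Controls;

        public partial class MainPage : PhoneApplicationPage
        {
            public MainPage()
            {
                InitializeComponent();
            }

            private void LoadButton_Click(object sender, RoutedEventArgs e)
            {
                // No synchronous DownloadString on the phone - wire up the async event instead.
                var client = new WebClient();
                client.DownloadStringCompleted += (s, args) =>
                {
                    if (args.Error == null)
                    {
                        // Silverlight raises this event on the UI thread,
                        // so it is safe to touch controls here.
                        ResultText.Text = args.Result;
                    }
                };
                client.DownloadStringAsync(new Uri("http://example.com/data.txt"));
            }
        }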

    Read the article

  • Simple Excel Export with EPPlus

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2013/10/30/simple-excel-export-with-epplus.aspx

    Anyone I’ve ever met who works with an application that sits in front of a lot of data loves it when they can get that data exported to an Excel file to mess around with offline. As both developer and end user of a little website project that I’ve been working on, I found myself wanting to get a bunch of the data that the application was collecting into an Excel file. The great thing about being both an end user and a developer on a project is that you can build the features that you really want! While putting this feature together I came across the fantastic EPPlus library. This library is certainly very well known and popular, but I was so impressed with it that I thought it was worth a quick blog post.

    This library is extremely powerful; it lets you create and manipulate Excel 2007/2010 spreadsheets in .NET code with a high degree of flexibility. My only gripe with the project is that they are not touting how insanely easy it is to build a basic Excel workbook from a simple data source. If I were running this project, the approach I’m about to demonstrate in this post would be front and center on the landing page, because it shows how easy it really is to get started and serves as a good way to ease yourself into some of the more advanced features.

    The website in question uses RavenDB, which means that we’re dealing with POCOs to model the data throughout all layers of the application. I love working like this, so when it came time to figure out how to export some of this data to an Excel spreadsheet I wanted to find a way to take an IEnumerable<T> and just have it dumped to Excel, with each item in the collection modeled as a single row in the worksheet. Consider the following class:

        public class Employee
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public decimal HourlyRate { get; set; }
            public DateTime HireDate { get; set; }
        }

    Now let’s say we have a collection of these represented as an IEnumerable<Employee> and we want to be able to output it to an Excel file for offline querying/manipulation. As it turns out, this is dead simple to do with EPPlus. Have a look:

        public void ExportToExcel(IEnumerable<Employee> employees, FileInfo targetFile)
        {
            using (var excelFile = new ExcelPackage(targetFile))
            {
                var worksheet = excelFile.Workbook.Worksheets.Add("Sheet1");
                worksheet.Cells["A1"].LoadFromCollection(Collection: employees, PrintHeaders: true);
                excelFile.Save();
            }
        }

    That’s it. Let’s break down what’s going on here:

    - Create an ExcelPackage to model the workbook (Excel file). Note that the ‘targetFile’ value here is a FileInfo object representing the location on disk where I want the file to be saved.
    - Create a worksheet within the workbook.
    - Get a reference to the top-leftmost cell (addressed as A1) and invoke the ‘LoadFromCollection’ method, passing it our collection of Employee objects. Behind the scenes this reflects over the properties of the type provided and pulls out any public members to become columns in the resulting Excel output. The ‘PrintHeaders’ parameter tells EPPlus to grab the name of the property and put it in the first row.
    - Save the Excel file.

    All of the heavy lifting here is being done by the ‘LoadFromCollection’ method, and that’s a good thing. Now, this was really easy to do, but it has some limitations. Using this approach you get a very plain, un-styled Excel worksheet. The column widths are all set to the default. The number format for all cells is ‘General’ (which proves particularly interesting if you have a DateTime property in your data source). I’m a “no frills” guy, so I wasn’t bothered at all by trading off simplicity for style and formatting. That said, EPPlus has tons of samples that you can download that illustrate how to apply styles and formatting to cells, and a ton of other advanced features that are way beyond the scope of this post.
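
    Since the DateTime/‘General’ format issue mentioned above is the one that tends to bite first, here is a minimal sketch (my own addition, not from the original post) of how EPPlus styling calls can be layered onto the same export. The date format string, the worksheet name, and the assumption that HireDate lands in column D all follow from the Employee class above; the method needs the same namespaces as the original example (System, System.IO, System.Collections.Generic, OfficeOpenXml).

        public void ExportToExcelWithFormatting(IEnumerable<Employee> employees, FileInfo targetFile)
        {
            using (var excelFile = new ExcelPackage(targetFile))
            {
                var worksheet = excelFile.Workbook.Worksheets.Add("Sheet1");
                worksheet.Cells["A1"].LoadFromCollection(Collection: employees, PrintHeaders: true);

                // HireDate is the fourth public property on Employee, so it ends up in
                // column D; give it a real date format instead of 'General'.
                worksheet.Column(4).Style.Numberformat.Format = "yyyy-mm-dd";

                // Size every populated column to fit its contents instead of the default width.
                worksheet.Cells[worksheet.Dimension.Address].AutoFitColumns();

                excelFile.Save();
            }
        }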

    Read the article

  • An Alphabet of Eponymous Aphorisms, Programming Paradigms, Software Sayings, Annoying Alliteration

    - by Brian Schroer
    Malcolm Anderson blogged about “Einstein’s Razor” yesterday, which reminded me of my favorite software development “law”, the name of which I can never remember. It took much Wikipedia-ing to find it (Hofstadter’s Law – see below), but along the way I compiled the following list:

    - Amara’s Law: We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.
    - Brooks’ Law: Adding manpower to a late software project makes it later.
    - Clarke’s Third Law: Any sufficiently advanced technology is indistinguishable from magic.
    - Law of Demeter: Each unit should only talk to its friends; don’t talk to strangers.
    - Einstein’s Razor: “Make things as simple as possible, but not simpler” is the popular paraphrase, but what he actually said was “It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience”, an overly complicated quote which is an obvious violation of Einstein’s Razor. (You can tell by looking at a picture of Einstein that the dude was hardly an expert on razors or other grooming apparati.)
    - Finagle’s Law of Dynamic Negatives: Anything that can go wrong, will—at the worst possible moment. O’Toole’s Corollary: The perversity of the Universe tends towards a maximum.
    - Greenspun’s Tenth Rule: Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp. (Morris’s Corollary: “…including Common Lisp”)
    - Hofstadter’s Law: It always takes longer than you expect, even when you take into account Hofstadter’s Law.
    - Issawi’s Omelet Analogy: One cannot make an omelet without breaking eggs – but it is amazing how many eggs one can break without making a decent omelet.
    - Jackson’s Rules of Optimization: Rule 1: Don’t do it. Rule 2 (for experts only): Don’t do it yet.
    - Kaner’s Caveat: A program which perfectly meets a lousy specification is a lousy program.
    - Liskov Substitution Principle (paraphrased): Functions that use pointers or references to base classes must be able to use objects of derived classes without knowing it.
    - Mason’s Maxim: Since human beings themselves are not fully debugged yet, there will be bugs in your code no matter what you do.
    - Nils-Peter Nelson’s Nil I/O Rule: The fastest I/O is no I/O.
    - Occam’s Razor: The simplest explanation is usually the correct one.
    - Parkinson’s Law: Work expands so as to fill the time available for its completion.
    - Quentin Tarantino’s Pie Principle: “…you want to go home, have a drink, and go and eat pie and talk about it.” (OK, he was talking about movies, not software, but I couldn’t find a “Q” quote about software. And wouldn’t it be cool to write a program so great that the users want to eat pie and talk about it?)
    - Raymond’s Rule: Computer science education cannot make anybody an expert programmer any more than studying brushes and pigment can make somebody an expert painter.
    - Sowa’s Law of Standards: Whenever a major organization develops a new system as an official standard for X, the primary result is the widespread adoption of some simpler system as a de facto standard for X.
    - Turing’s Tenet: We shall do a much better programming job, provided we approach the task with a full appreciation of its tremendous difficulty, provided that we respect the intrinsic limitations of the human mind and approach the task as very humble programmers.
    - Udi Dahan’s Race Condition Rule: If you think you have a race condition, you don’t understand the domain well enough. These rules didn’t exist in the age of paper; there is no reason for them to exist in the age of computers. When you have race conditions, go back to the business and find out the actual rules.
    - Van Vleck’s Kvetching: We know about as much about software quality problems as they knew about the Black Plague in the 1600s. We’ve seen the victims’ agonies and helped burn the corpses. We don’t know what causes it; we don’t really know if there is only one disease. We just suffer – and keep pouring our sewage into our water supply.
    - Wheeler’s Law: All problems in computer science can be solved by another level of indirection... except for the problem of too many layers of indirection. Wheeler also said “Compatibility means deliberately repeating other people’s mistakes.”
    - The Wrong Road Rule of Mr. X (anonymous): No matter how far down the wrong road you’ve gone, turn back.
    - Yourdon’s Rule of Two Feet: If you think your management doesn’t know what it’s doing or that your organisation turns out low-quality software crap that embarrasses you, then leave.
    - Zawinski’s Law of Software Envelopment: Every program attempts to expand until it can read mail. Zawinski is also responsible for “Some people, when confronted with a problem, think ‘I know, I’ll use regular expressions.’ Now they have two problems.” He once commented about X Windows widget toolkits: “Using these toolkits is like trying to make a bookshelf out of mashed potatoes.”

    Read the article

< Previous Page | 28 29 30 31 32 33 34 35 36 37 38 39  | Next Page >