Search Results

Search found 41235 results on 1650 pages for 'source control bindings'.

Page 554 of 1650

  • Which method of SQL Server 2005 or 2008 Replication is best for ease of field changes?

    - by Rick
    We need 15-minute warm updates from one SQL Server to another. Log Shipping looks good and appears easy to set up. We are also looking into Transactional Replication. The data only needs to copy one way. We have two main requirements: 1) The destination database needs to be at most a 15-minute-old copy of the source, and it needs to retry and catch back up if a network cable is unplugged for a while. 2) We would really like table changes in the source (fields added or modified) to carry over as easily as possible. Thanks in advance for all suggestions.

    Read the article

  • ODBC in SSIS 2012

    - by jamiet
    In August 2011 the SQL Server client team published a blog post entitled Microsoft is Aligning with ODBC for Native Relational Data Access, in which they basically said "OLE DB is the past, ODBC is the future. Deal with it.". From that blog post:

        We encourage you to adopt ODBC in the development of your new and future versions of your application. You don't need to change your existing applications using OLE DB, as they will continue to be supported on Denali throughout its lifecycle. While this gives you a large window of opportunity for changing your applications before the deprecation goes into effect, you may want to consider migrating those applications to ODBC as a part of your future roadmap.

    I recently undertook a project using SSIS 2012 and heeded that advice by opting to use ODBC Connection Managers rather than OLE DB Connection Managers. Unfortunately my finding was that the ODBC Connection Manager is not yet ready for primetime use in SSIS 2012. The main issue I found was that you can't populate an Object variable with a recordset when using an Execute SQL Task connecting to an ODBC data source; any attempt to do so will result in an error:

        "Disconnected recordsets are not available from ODBC connections."

    I have filed a bug on Connect at ODBC Connection Manager does not have same funcitonality as OLE DB. For this reason I strongly recommend that you don't make the move to ODBC Connection Managers in SSIS just yet - best to wait for the next version of SSIS before doing that.

    I found another couple of issues with the ODBC Connection Manager that are worth keeping in mind:

    - It doesn't recognise System Data Source Names (DSNs), only User DSNs (bug filed at ODBC System DSNs are not available in the ODBC Connection Manager). UPDATE: According to a comment on that Connect item this may only be a problem on 64bit.
    - In the OLE DB Connection Manager parameter ordinals are 0-based; in the ODBC Connection Manager they are 1-based (oh I just can't wait for the upgrade mess that ensues from this one!!!)

    You have been warned!
    @jamiet

    Read the article

  • Nginx, proxy passing to Apache, and SSL

    - by Vic
    I have Nginx and Apache set up, with Nginx proxy-passing everything to Apache except static resources. I have a server set up for port 80 like so:

        server {
            listen 80;
            server_name *.example1.com *.example2.com;
            [...]
            location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ {
                access_log off;
                expires max;
                add_header Pragma public;
                add_header Cache-Control "public, must-revalidate, proxy-revalidate";
                add_header Vary: Accept-Encoding;
            }
            location / {
                proxy_pass http://127.0.0.1:8080;
                include /etc/nginx/conf.d/proxy.conf;
            }
        }

    And since we have multiple SSL sites (with different SSL certificates) I have a server{} block for each of them, like so:

        server {
            listen 443 ssl;
            server_name *.example1.com;
            [...]
            location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ {
                access_log off;
                expires max;
                add_header Pragma public;
                add_header Cache-Control "public, must-revalidate, proxy-revalidate";
                add_header Vary: Accept-Encoding;
            }
            location / {
                proxy_pass https://127.0.0.1:8443;
                include /etc/nginx/conf.d/proxy.conf;
                proxy_set_header X-Forwarded-Port 443;
                proxy_set_header X-Forwarded-Proto https;
            }
        }

        server {
            listen 443 ssl;
            server_name *.example2.com;
            [...]
            location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ {
                access_log off;
                expires max;
                add_header Pragma public;
                add_header Cache-Control "public, must-revalidate, proxy-revalidate";
                add_header Vary: Accept-Encoding;
            }
            location / {
                proxy_pass https://127.0.0.1:8445;
                include /etc/nginx/conf.d/proxy.conf;
                proxy_set_header X-Forwarded-Port 443;
                proxy_set_header X-Forwarded-Proto https;
            }
        }

    First of all, I think there is a very obvious problem here, which is that I'm double-encrypting everything: first at the nginx level and then again by Apache. To make everything worse, I just started using Amazon's Elastic Load Balancer, so I added the certificate to the ELB and now SSL encryption is happening three times. That's gotta be horrible for performance. What is the sane way to handle this? Should I be forwarding https on the ELB -> http on nginx -> http on Apache? Secondly, there is so much duplication above. Is the best method to not repeat myself to put all of the static asset handling in an include file and just include it in each server block?
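
    One widely used layout is to terminate TLS exactly once, at the load balancer, and speak plain HTTP behind it. A rough sketch of the nginx side (assuming the ELB holds the certificates, listens on 443, forwards to port 80 on the instance, and sets the X-Forwarded-Proto and X-Forwarded-Port headers, which ELB HTTP listeners do):

        server {
            listen 80;
            server_name *.example1.com *.example2.com;

            location / {
                proxy_pass http://127.0.0.1:8080;
                include /etc/nginx/conf.d/proxy.conf;
                # Pass the balancer's view of the original scheme/port through
                # to Apache instead of re-encrypting locally.
                proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
                proxy_set_header X-Forwarded-Port  $http_x_forwarded_port;
            }
        }

    The trade-off is that traffic between the balancer and the instance travels unencrypted, which may or may not be acceptable on your internal network.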

    Read the article

  • Gentoo box can't cURL or ping after restarting net.eth1

    - by Curlybraces
    Hi all, the following is completely baffling me. We currently have a Gentoo box which acts as our LAMP, DNS, and DHCP server. It is assigned a static IP on the network. This server is connected directly to the internet via a BT Business Hub router, and is also connected to a patch panel/switch port which connects the remaining office (around 10 PCs) to the server. Everything had been plain sailing until the other day, when the server was restarted. For some reason, only portions of network accessibility are now available, depending on which ethernet device was last restarted. Restarting net.eth0 allows the office server to cURL, ping, etc., but stops all networked PCs from accessing the internet. Restarting net.eth1 then restores internet access to the network, but stops the server from curling, pinging, etc. again. However, even when the server can't ping or curl, I can still SSH and make remote MySQL connections from the server command line to other external servers that we own.

    Here's my route map (router is 192.168.1.254):

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref  Use Iface
        192.168.1.0     0.0.0.0         255.255.255.0   U     0      0    0   eth0
        192.168.1.0     0.0.0.0         255.255.255.0   U     0      0    0   eth1
        169.254.0.0     0.0.0.0         255.255.0.0     U     0      0    0   eth1
        127.0.0.0       0.0.0.0         255.0.0.0       U     0      0    0   lo
        0.0.0.0         192.168.1.254   0.0.0.0         UG    0      0    0   eth1

    Here's my /etc/conf.d/net:

        iface_eth0="192.168.1.99 broadcast 192.168.1.255 netmask 255.255.255.0"
        iface_eth1="dhcp"

    None of the above has ever been changed, however. Things have just ceased to operate correctly, which makes me think it's a freshly added iptables rule. Here's the iptables filter table:

        Chain INPUT (policy ACCEPT)
        target prot opt source          destination
        DROP   tcp  --  ##.##.##.##     anywhere        tcp dpt:ssh
        ACCEPT all  --  anywhere        anywhere        state RELATED,ESTABLISHED
        ACCEPT all  --  anywhere        anywhere
        ACCEPT tcp  --  anywhere        anywhere        tcp dpt:2199
        ACCEPT tcp  --  anywhere        anywhere        tcp dpt:3199
        ACCEPT tcp  --  ##.###.###.##   anywhere        tcp dpt:http
        ACCEPT tcp  --  ###.###.##.##   anywhere        tcp dpt:2199
        ACCEPT tcp  --  ##.###.###.###  anywhere        tcp dpt:http
        ACCEPT tcp  --  ##.###.##.##    anywhere        tcp dpt:http
        ACCEPT tcp  --  ##.###.###.###  anywhere        tcp dpt:3128
        ACCEPT udp  --  ##.###.###.###  anywhere        udp dpt:3128
        ACCEPT tcp  --  ##.###.###.###  anywhere        tcp dpt:http
        ACCEPT tcp  --  ##.###.###.###  anywhere        tcp dpt:https

        Chain FORWARD (policy ACCEPT)
        target prot opt source          destination
        ACCEPT all  --  anywhere        ##.###.###.##
        DROP   all  --  anywhere        ##.###.###.##
        ACCEPT all  --  anywhere        anywhere        state NEW,ESTABLISHED

        Chain OUTPUT (policy ACCEPT)
        target prot opt source          destination
        ACCEPT udp  --  anywhere        anywhere        udp spt:2199
        ACCEPT udp  --  anywhere        anywhere        udp spt:4817
        ACCEPT udp  --  anywhere        anywhere        udp spt:4819
        ACCEPT udp  --  anywhere        anywhere        udp spt:3199

    Help gratefully appreciated.
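
    As an aside for readers hitting something similar: with both eth0 and eth1 holding routes on the same 192.168.1.0/24 subnet, it can help to ask the kernel directly which path it would pick (a small sketch using the standard iproute2 tools; the probe address is arbitrary):

        # Which interface and gateway would carry traffic to an outside host?
        ip route get 8.8.8.8

        # Current addresses on each NIC, for comparison with the routing table
        ip addr show dev eth0
        ip addr show dev eth1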

    Read the article

  • Alternatives to Citrix GoToAssist ?

    - by Evan Carroll
    Citrix GoToAssist is a really nifty little web application for customer support that allows you to take control of someone's OS X or Windows machine. Essentially, it works like this:

    - You log in to your management console and get a code
    - You give them the code and a website (fastsupport.com)
    - They go there and enter the code
    - They accept the browser applet, which installs a program on their computer
    - You have control of their desktop

    You can see their desktop, configure applications, etc. They can also see when you disconnect. It is really rather nifty, but it doesn't support Linux and it is rather expensive ($660 a year). Does anyone know of any alternatives to this? I'm looking for a solution as simple for the user as this one, and one that doesn't require firewall configuration or setting up ssh/vnc/rdesktop etc.

    Read the article

  • Setting up an SVN repository on a Windows server to work with Xcode

    - by Tejaswi Yerukalapudi
    Hi, I need to set up an SVN repository for an iPhone / iPad app that I'm working on. It needs to be set up on a Windows Server 2003 instance. We run VSS on it, but it doesn't really work all that well with Xcode, so I'd like to migrate ASAP. The problem is, I'm a new graduate and this is the first time I'll be using source control software, so any tutorials on how to set one up and configure it to work correctly with Macs using Xcode are greatly appreciated! Though it's not a priority, I'd also like to try to get my company to ditch VSS for SVN (I've been reading this article: http://www.codinghorror.com/blog/2006/08/source-control-anything-but-sourcesafe.html). How hard is it to migrate existing builds in VSS into something like SVN? Thanks, Teja
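
    For anyone in the same spot, the bare-bones command-line route looks roughly like this (a sketch; repository paths, hostnames, and project names are placeholders, and a packaged server such as VisualSVN Server can handle the Windows side with less manual setup):

        # On the Windows server: create a repository and serve it over svn://
        svnadmin create C:\Repositories\myproject
        svnserve -d -r C:\Repositories

        # On the Mac: check out, add the Xcode project, commit
        svn checkout svn://server-hostname/myproject ~/Projects/myproject
        cd ~/Projects/myproject
        svn add MyApp.xcodeproj
        svn commit -m "Initial import"

    Xcode's built-in SCM integration can then be pointed at the same svn:// URL.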

    Read the article

  • Software to replicate one computers display onto many other displays

    - by Joe Taylor
    We have a classroom setup with one teacher's PC at the front. I am looking for some software, preferably open source although this is not a deal breaker, to force all displays in the room to replicate the teacher's display. Ideally the software could also be locked so that students could not exit it while it was running. Does anyone know of any software that could perform this task? I have googled around for a solution but haven't found anything suitable as yet. It would be running on Windows 7. Commercial flavours of this kind of software I have found are LanSchool and NetOp; open source alternatives would be better.

    Read the article

  • CodePlex Daily Summary for Sunday, August 24, 2014

    CodePlex Daily Summary for Sunday, August 24, 2014

    Popular Releases

    - CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.31.0: Fixed problem with menu item 'Plugins->CS-Script->Debug' invoking 'Run' instead of 'Debug'.
    - Media Companion: MC3.599b: New: MC - remember the last monitor Media Companion ran on, and re-open there if available; MC - if Notepad++ is installed, use it for opening nfo XML files; Movie - fix: fanart & poster searching using the 'Google Search' button opened multiple browser tabs, one per search word; Movie - allow re-scrape with the XBMC TMDB scraper if an IMDB Id is present; TV - added option to save the season poster into the season folder as folder.jpg. Fixed: Movie - table view error if a row header was selected; Movie - Tab...
    - ASP.NET Identity 2.0 Azure Table Storage: Release 1.2.5.2: Optimized the login and email index queries and the IsInRoleAsync operation. 100% unit test pass and 100% code coverage. Full sample source available as a download or in the source branch /Releases/1.2.x.x/sample. Sample code doesn't require an Azure account, but does require the Azure SDK with the Storage Emulator at a minimum for running locally. Full suite of unit tests against this assembly at a 100% pass rate against the Azure local emulator and against a live Azure Storage acc...
    - BugNET Issue Tracker: BugNET 1.6.327: This release contains fixes and enhancements from the previous 1.6.315 release. Please read the release notes for BugNET 1.6.327: http://blog.bugnetproject.com/2014/08/23/bugnet-1-6-327-and-bugnet-pro-1-5-99-released/
    - DIII Save Editor: ROS Alpha 1.2.14.100: Initial ROS alpha release; please report all bugs.
    - SEToolbox: SEToolbox 01.044.014 Release 2: Fixed ship name not saving. Fixed broken cubes view bug. Fixed "cast VRage.MyFixedPoint" error when opening games with meteors. Added checkbox when importing a 3D model to Export Ship, to fill it as solid.
    - CS-Script Source: Release v3.8.5: Fixed problem with the warnings getting hidden in case of a successful compilation. Packages: cs-script.7z - CS-Script Suite (binaries, documentation, samples); cs-script.ExtensionPack.7z - CS-Script Extension Pack (additional binaries and samples); cs-scriptDocs.7z - CS-Script documentation.
    - Magick.NET: Magick.NET 7.0.0.0002: Magick.NET linked with ImageMagick 7.
    - babelua: 1.6.7.0: V1.6.7.0 - 2014.8.21. New feature: added a file search window (Ctrl+1 or Alt+L), like the file search in VC Assistant. Stability improvements: better performance when BabeLua loads/unloads and when the debugger loads Lua files.
    - XboxConsole: XboxConsole 2.0.40820.0: Updated release with added support for the August XDK and the Party API (see updated documentation). Supports the following XDK versions: April 2014, May 2014, June 2014 (all QFEs), July 2014 (all QFEs), August 2014.
    - Open NFe: RDI Open NFe 3.0 (alpha): Update to the 3.10 layout of the NFe.
    - AssaultCube Reloaded: Release 2.6.1: Windows XP users must download the patch in addition to the Windows package. Some changes couldn't make it into 2.6, and a recode was started before 2.6.1 could be released; however, version 2.6.1 represents the first beta release of 2.7. Changelog: recoded on AC 1.2 as the base version (likely fewer crashes); class manager; simpler killfeed, removed kill messages; hide KILL indicator in classic, update at 4-second intervals; disable spawn protection upon firing the first sh...
    - SysLog Server: SysLogServer: This is not a commercial product; use at your own responsibility.
    - MolGridCal & MolCal: MolGridCal tutorial v1.1: Updated the contents for grid computing virtual screening.
    - MSSQL Deployment Tool: Microsoft SQL Deploy Tool v1.3.1: MicrosoftSqlDeployTool v1.3.1.38348. What's changed? Updated namespace and assembly name; bug fixing.
    - SharePoint 2013 Search Query Tool: v2.1: Layout improvements; bug fixes; stores auth method and user name; moved experimental settings to the Advanced box.
    - CtrlAltStudio Viewer: CtrlAltStudio Viewer 1.2.2.41183 Alpha: This alpha of the CtrlAltStudio Viewer provides some preliminary Oculus Rift DK2 support. Release notes: http://ctrlaltstudio.com/viewer/release-notes/1-2-2-41183-alpha. Support info: http://ctrlaltstudio.com/viewer/support. Privacy policy: http://ctrlaltstudio.com/viewer/privacy. Disclaimer: this software is not provided or supported by Linden Lab, the makers of Second Life.
    - HDD Guardian: HDD Guardian 0.6.1: New: package now includes smartctl 6.3. Removed: standard notification e-mail; you now have to set your own mail server to send e-mail alerts. Bugfix: USB detection error; custom e-mail server settings issue; bottom panel displaying a wrong ATA error count.
    - VG-Ripper & PG-Ripper: VG-Ripper 2.9.62: NEW: added support for 'MadImage.org', 'ImgSpot.org', 'ImgClick.net', 'Imaaage.com', 'Image-Bugs.com', 'Pictomania.org', 'ImgDap.com' and 'FileSpit.com' links. FIXED: 'ImgSee.me' links.
    - CMake Tools for Visual Studio: CMake Tools for Visual Studio 1.2: This release adds the following new features and bug fixes over version 1.1: added support for CMake 3.0; added support for word completion; added IntelliSense support for the CMAKE_HOST_SYSTEM_INFORMATION command; fixed syntax highlighting for tokens beginning with escape sequences; fixed an issue uninstalling CMake Tools for Visual Studio after Visual Studio has been uninstalled.

    New Projects

    - Dnn Picasa Image Gallery: The DnnC Picasa Image Gallery module allows you to display your Picasa web albums and their photos within your DNN website.
    - Hot Mess: Hot Mess game software and Arduino firmware.
    - Kinect HD Face Sample in unmanaged C++: An unmanaged C++ project based on the Kinect for Windows v2 SDK sample FaceBasics; instead of using the Face source, it utilizes HDFace.
    - Modbus Master: A MODBUS master application for Windows supporting all MODBUS function codes, a plugin interface and a scripting interface.
    - Path Finding on Wireless Sensor Network: Path finding on wireless sensor networks.
    - perilla: Enhanced C++ templates.
    - XiamiSigLite-Silent

    Read the article

  • Apt Stalls When Using HTTP Sources

    - by UltraNurd
    I was getting some behavior from apt-get/aptitude on an admittedly crusty old webserver that was inexplicable to me. While the machine was otherwise running fine, as soon as I tried a package upgrade, after downloading a few updates it would stall completely, then my SSH session hung (and I was unable to reconnect), requiring a hard restart. First, I switched to a different package source in /etc/apt/sources.list, but still got the same behavior. At this point I was assuming the NIC was dying in some weird way... but as soon as I changed the package source to use FTP instead of HTTP, everything worked fine and I was able to upgrade. For now I'm not too concerned, since I have an easy workaround, but it implies that there's something very weird with my network setup, since it seems to be protocol (or port?) specific. I didn't think any of my NAT setup would affect outbound traffic, but I could be crazy. Any ideas what I should look for?
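
    For reference, the workaround described amounts to switching the URI scheme in /etc/apt/sources.list; a sketch (the mirror hostname is a placeholder, and the mirror must actually offer FTP):

        # Before: deb http://archive.example.com/ubuntu hardy main
        # After:  deb ftp://archive.example.com/ubuntu hardy main
        sudo sed -i 's|deb http://|deb ftp://|g' /etc/apt/sources.list
        sudo apt-get update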

    Read the article

  • Skin Object Tokens for DotNetNuke 5 - 8 Videos

    In this tutorial we demonstrate how to use Skin Object Tokens in DotNetNuke v5 and above. Skin Object Tokens are a new skinning method introduced in DotNetNuke 5 for adding tokens into a DotNetNuke skin. A Skin Object Token is a web user control; it covers skin elements such as the logo, menu, search, login links, date, copyright, languages, links, banners, privacy, terms of use, etc. This new token method has been introduced into DotNetNuke with the idea of making it simpler to add a skin object into a DotNetNuke skin. The videos contain:

    - Video 1 - Introduction to HTML Object Token Skinning
    - Video 2 - Basic Styling of a Skin and Creating Multiple Content Panes
    - Video 3 - Styling, Control Panel, Login and Register Skin Object Tokens
    - Video 4 - Packaging, Installing, Testing and Viewing the ASCX Version of the Skin
    - Video 5 - Viewing the Attributes for Skin Object Tokens, Logo Token, Search Token
    - Video 6 - Breadcrumb Token, Text Token and Localization, Links Token
    - Video 7 - More Skin Tokens and Token Replacement
    - Video 8 - Demonstration of the Object Tokens and Bug Fixing

    Read the article

  • Basic is Best

    - by Eric A. Stephens
    Fellow foodies will recognize the recent movement toward "farm-to-table" restaurants. These venues attempt to simplify their menus and source ingredients as close to the source as possible. I had the opportunity to dine at such a restaurant the other evening. I was gushing about the appetizer to my server when she described the preparation for the item and then punctuated her comments with "basic is best". I reminded my fellow enterprise architect diners that there was an architecture lesson in that statement. They rolled their eyes and chuckled. But they also knew I was right. I'm reminded of Frederick Brooks' book The Mythical Man-Month and his latest, The Design of Design. The former, a must-read book, talks about complexity. But he refrains from damning all complexity. The world we live in and the enterprises we strive to transform with enterprise architecture are complicated organisms, much like the human body. But sometimes a simple solution is the best approach. Fewer applications (think: portfolio rationalization). Fewer components. Fewer lines of code. Whatever level of abstraction you are working at, less is more. I'm reminded of the enterprise architecture principle "Control Technical Diversity". At one firm I created pithy catch phrases for each principle. I named this one "Less is More". But perhaps another variation is what my server said the other night: "basic is best".

    Read the article

  • Nginx + PHP-FPM ignores no-cache headers

    - by Eric Winchell
    I'm using the following headers on a PHP page:

        // Prevent page caching.
        header('Expires: Tue, 20 Oct 1981 05:00:00 GMT');
        header('Cache-Control: no-store, no-cache, must-revalidate');
        header('Cache-Control: post-check=0, pre-check=0', FALSE);
        header('Pragma: no-cache');

    I'm also using a rand=999999999 parameter (with a real random number) in the URLs. But pages are still being cached: reload works, but the first load is cached. Anyone know where I can change this?
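
    If nginx is the layer doing the caching here, the usual suspect is a FastCGI cache configured to ignore upstream headers. A sketch of what to look for in the nginx config (the directives are standard nginx; the socket path is a placeholder):

        location ~ \.php$ {
            fastcgi_pass unix:/var/run/php-fpm.sock;  # placeholder socket path
            include fastcgi_params;

            # If a line like the following is present, nginx will cache the page
            # even though PHP sent Cache-Control: no-cache:
            #   fastcgi_ignore_headers Cache-Control Expires;
            # Removing it lets the PHP headers through; as a quick test, the
            # cache can also be disabled outright:
            fastcgi_cache off;
        }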

    Read the article

  • managing openfire users and conversations from java application

    - by user1600405
    I need to control Openfire users and conversations from my own Java application; the number of users is more than 1000. This is what I want to do exactly: listen for new invitations to chat rooms and accept them automatically, listen for new messages for all users, and listen for the presence status of all users. I would also like to send a new invitation to join a new chat room from specific users, send a message from a user to a chat room, and change the presence status of users. I don't know how I can do that; I tried Smack, and it seems that I can only manage or control a single user. Any help is appreciated. Thanks for your response.
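
    For a single user, the invitation part looks roughly like the sketch below, written against the Smack 3.x API (treat the exact class and method names as assumptions to check against your Smack version's docs, since signatures changed between releases). The limitation noted in the question is real: each XMPPConnection is logged in as one user, so covering 1000+ users means holding one connection per user, or moving the logic server-side into an Openfire plugin instead.

        import org.jivesoftware.smack.Connection;
        import org.jivesoftware.smack.XMPPConnection;
        import org.jivesoftware.smack.packet.Message;
        import org.jivesoftware.smackx.muc.InvitationListener;
        import org.jivesoftware.smackx.muc.MultiUserChat;

        public class AutoJoin {
            public static void main(String[] args) throws Exception {
                // Placeholder host and credentials
                XMPPConnection conn = new XMPPConnection("openfire.example.com");
                conn.connect();
                conn.login("alice", "secret");

                // Accept every room invitation this user receives by joining the room
                MultiUserChat.addInvitationListener(conn, new InvitationListener() {
                    public void invitationReceived(Connection c, String room, String inviter,
                                                   String reason, String password, Message msg) {
                        try {
                            MultiUserChat muc = new MultiUserChat(c, room);
                            muc.join("alice"); // nickname in the room
                            muc.sendMessage("Hello from my application");
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                });
            }
        }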

    Read the article

  • Add Machine Key to machine.config in Load Balancing environment to multiple versions of .net framework

    - by davidb
    I have two web servers behind an F5 load balancer. Each web server has applications identical to the other's. There was no issue until the load balancer's config changed from source-address persistence to least connections. Now in some applications I receive this error:

        Server Error in '/' Application.
        Validation of viewstate MAC failed. If this application is hosted by a Web Farm
        or cluster, ensure that <machineKey> configuration specifies the same
        validationKey and validation algorithm. AutoGenerate cannot be used in a cluster.

        Description: An unhandled exception occurred during the execution of the current
        web request. Please review the stack trace for more information about the error
        and where it originated in the code.

        Exception Details: System.Web.HttpException: Validation of viewstate MAC failed.
        If this application is hosted by a Web Farm or cluster, ensure that <machineKey>
        configuration specifies the same validationKey and validation algorithm.
        AutoGenerate cannot be used in a cluster.

        Source Error: The source code that generated this unhandled exception can only
        be shown when compiled in debug mode. To enable this, please follow one of the
        below steps, then request the URL:
        1) Add a "Debug=true" directive at the top of the file that generated the error.
           Example: <%@ Page Language="C#" Debug="true" %>
        or:
        2) Add the following section to the configuration file of your application:
           <configuration>
              <system.web>
                  <compilation debug="true"/>
              </system.web>
           </configuration>
        Note that this second technique will cause all files within a given application
        to be compiled in debug mode; the first technique will cause only that
        particular file to be compiled in debug mode. Important: Running applications
        in debug mode does incur a memory/performance overhead. You should make sure
        that an application has debugging disabled before deploying into a production
        scenario.

    How do I add a machine key to the machine.config file? Do I do it at server level in IIS, or at website/application level for each site? Do the validation and decryption keys have to be the same across both web servers, or are they different? Should they be different for each machine.config version of .NET? I cannot find any documentation of this scenario.
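
    For context, the standard cure for this error is an explicit <machineKey> element, identical on every server in the farm, instead of AutoGenerate. A sketch of the element (placed under <system.web> in machine.config or, more commonly, in each application's web.config; the key values below are truncated placeholders, never keys to copy):

        <system.web>
          <machineKey
              validationKey="21F090935F6E49C2C797F69BB...PLACEHOLDER"
              decryptionKey="ABAA84D7EC4BB56D75D217CE...PLACEHOLDER"
              validation="SHA1"
              decryption="AES" />
        </system.web>

    The same pair of keys must be deployed to both servers behind the balancer. Setting it per application in web.config scopes the keys to the sites that need them, while setting it once in machine.config applies to every site on the box.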

    Read the article

  • How do I remove a malformed line from my sources.list?

    - by eminencejae
    I have uninstalled and reinstalled the Ubuntu Software Center as per info I found in a similar thread, and I got the same response about line 91 or something like that. I just tried to upload a screenshot, but since I'm new it won't allow me to. I also cannot figure out how to cut and paste anything, so I have to hand-type what the error screen says, both when I attempt to open the Software Center (and nothing happens) and when I try to enter commands into the terminal to uninstall, reinstall, whatever. I get the same following:

        COULD NOT INITIALIZE THE PACKAGE INFORMATION
        An unresolvable problem occurred while initializing the package information.
        Please report this bug against the 'update-manager' package and include the
        following error message:
        'E: Malformed line 91 in source list /etc/apt/sources.list (dist parse)
         E: The list of sources could not be read.
         E: The package list or status file could not be parsed or opened.'

    How do I report bugs? What can be done about this? I have searched, and everything everyone says to do leads me back to the same line-error message. I don't know how to get to line 91 in the source list to tell you what it says. Sorry, I'm really new to this. What I need is to find out how to get there and fix what it says. I would really like NOT to have to repartition my hard drive and start from scratch, so I'm really looking forward to getting this problem solved. I need to be able to install new software.
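
    For anyone stuck at the same point, getting at line 91 only takes a couple of terminal commands (a sketch; nano can be swapped for any editor):

        # Print line 91 of the file
        sed -n '91p' /etc/apt/sources.list

        # Open the file at line 91; fix the line or comment it out with a leading '#'
        sudo nano +91 /etc/apt/sources.list

        # Check that apt can read the list again
        sudo apt-get update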

    Read the article

  • How to move ubuntu 12.04 on another drive

    - by Maksim
    How can I move my Ubuntu install to another drive? I know about Clonezilla, but the problem is that the destination drive is smaller than the source one. GParted can't copy/paste a partition when the destination isn't the last partition. I tried dpkg --selected-packages and apt-clone. The first one just didn't install all my packages and removed existing ones, so now I don't have full Unity or all of my packages. The second one just failed while configuring a package. Before I did it that way, I copied my /etc to the new system.

    My partition tables:

    destination (gpt):

        1   1049kB  106MB   105MB   fat32  EFI System  ???????????
        2   106MB   12,1GB  12,0GB  ext4
        3   12,1GB  66,3GB  54,2GB  ext4

    source (msdos):

        1   1049kB  12,0GB  12,0GB  primary  ext4  ???????????
        2   12,0GB  492GB   480GB   primary  ext4
        3   492GB   500GB   8107MB  primary  linux-swap(v1)

    GPT isn't working with this Ubuntu, which uses GRUB 1.99. I don't know why, but my laptop can't boot any device with UEFI (just a black screen), and Ubuntu detects UEFI on a fresh install.
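
    Since the destination is smaller than the source, a file-level copy avoids the block-level size constraint that trips up Clonezilla. A rough sketch with rsync (device names are placeholders; /etc/fstab and the bootloader still need updating afterwards):

        # Destination partition already formatted, e.g. /dev/sdb2
        sudo mkdir -p /mnt/newroot
        sudo mount /dev/sdb2 /mnt/newroot

        # Copy the running system, skipping virtual filesystems and mount points
        sudo rsync -aHAX --exclude=/proc/* --exclude=/sys/* --exclude=/dev/* \
                   --exclude=/run/* --exclude=/mnt/* / /mnt/newroot

        # Afterwards: fix the UUIDs in /mnt/newroot/etc/fstab and reinstall
        # GRUB on the new disk from a chroot or a live CD.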

    Read the article

  • Anything to share a printer from 32-bit to 64-bit Windows?

    - by marklam
    I've got a printer which only has 32-bit drivers, so it's installed on a 32-bit machine (XP). I need it to appear as a printer (with duplex control etc.) on a 64-bit machine (Vista). I can't just share it using Windows printer sharing, because the 64-bit client requires drivers in order to connect to it, and no 64-bit driver for a similar printer works (using the new port named \\server\printername). I've tried the Ghostscript approach, but that doesn't seem to help with the duplex control etc. PrinterAnywhere doesn't support 64-bit OSes yet. Is there another way to do this?

    Read the article

  • Stumbling Through: Visual Studio 2010 (Part IV)

    So finally we get to the fun part: the fruits of all of our middle-tier/back-end labors, the generated classes interfacing with an XML data source that the previous posts were about, can now be presented quickly and easily to an end user. I think. We'll see. We'll be using a WPF window to display all of our various MFL information that we've collected in the two XML files, and we'll provide a means of adding, updating and deleting each of these entities using as little code as possible. Additionally, I would like to dig into the performance of this solution, as well as its flexibility if we were to modify the underlying XML schema. So first things first, let's create a WPF project and include our XML data in a data folder within it. On the main window, we'll drag out the following controls:

    - A combo box to contain all of the teams
    - A list box to show the players of the selected team, along with add/delete player buttons
    - A text box tied to the selected player's name, with a save button to save any changes made to the player name
    - A combo box of all the available positions, tied to the currently selected player's position
    - A data grid tied to the statistics of the currently selected player, with add/delete statistic buttons

    This monstrosity of a form and its associated project will look like this (don't forget to reference the DataFoundation project from the Presentation project). To get to the visual data binding, as we learned in a previous post, you first have to make sure the project containing your bindable classes is compiled. Do so, and then open the Data Sources pane to add a reference to the Teams and Positions classes in the DataFoundation project. Why only Team and Position? Well, we will get to Players from Teams, and Statistics from Players, so no need to make an interface for them, as we'll see in a second. As for Positions, we'll need a way to bind the dropdown to ALL positions; they don't appear underneath any of the other classes, so we need to reference the class directly. After adding these guys, expand every node in your Data Sources pane and see how the Team node allows you to drill into Players and then Statistics. This is why there was no need to bring in a reference to those classes for the UI we are designing.

    Now for the seriously hard work of binding all of our controls to the correct data sources. Drag the following items from the Data Sources pane to the specified control on the window design canvas:

    - Team.Name > Teams combo box
    - Team.Players.Name > Players list box
    - Team.Players.Name > Player name text box
    - Team.Players.Statistics > Statistics data grid
    - Position.Name > Positions combo box

    That is it! Really? Well, no, not really: there is one caveat here, in that the Positions combo box is not bound to the selected player's position. To do so, we will apply a binding to the position combo box's SelectedValue to point to the current player's PositionId value. That should do the trick; now all we need to worry about is loading the actual data. Sadly, it appears as if we will need to drop to code in order to invoke our IO methods to load all teams and positions. At least Visual Studio kindly created the stubs for us to do so. Note the weirdness with the InitializeDataFiles call: that is my current means of telling an IO where to load the data for each of the entities. I haven't thought of a more intuitive way than that yet, but do note that all data is loaded from Teams.xml except positions, which are loaded from Lookups.xml.

    I think that may be all we need to do to at least load all of the data; let's run it and see. Yay! All of our glorious data is being displayed! Er, wait, what's up with the position dropdown? Why is it red? Let's select the RB and see if everything updates. Crap, the position didn't update to reflect the selected player, but everything else did. Where did we go wrong in binding the position to the selected player? Thinking about it a bit and comparing it to how traditional data binding works, I realize that we never set the value member (or some similar property) to tell the control to join the Id of the source (positions) to the position Id of the player. I don't see a similar property on the combo box control, but I do see a property named SelectedValuePath that might be it, so I set it to Id and run the app again. Hey, all right! No red box around the positions combo box. Unfortunately, selecting the RB does not update the dropdown to point to Runningback. Hmmm. Now what could it be? Maybe the problem is that we are loading teams before we are loading positions, so when it binds the position Id, all of the positions aren't loaded yet. I went to the code-behind and switched things so positions load first: no dice. Same result when I run. Why? WHY? Ok, ok, calm down, take a deep breath. Get something with caffeine or sugar (preferably both) and think rationally. Ok, a gigantic chocolate chip cookie and a Mountain Dew chaser have never let me down in the past, so don't fail me now! Ah ha! Of course! I didn't even have to finish the Mountain Dew and I think I've got it: Data Context. By default, when setting the selected value binding for the dropdown, the data context was list_team. I don't even know what the heck list_team is; we want it to be bound to our team players view source resource instead. Running it now and selecting the various players: done and done. Everything read and bound, thank you caffeine and sugar! Oh, and thank you Visual Studio 2010.

    Let's wire up some of those buttons now. There has got to be a better way to do this, but it works for now. What the add player button does is add a new player object to the currently selected team. Unfortunately, I couldn't get the new object to automatically show up in the players list (something about not using an observable collection; gotta look into this), so I just save the change immediately and reload the screen. Terrible, but it works. Let's go after something easier: the save button. By default, as we type in new text for the player's name, it shows up in the list box as updated. Cool! Why couldn't my add-new-player logic do that? Anyway, the save button should be as simple as invoking MFL.IO.Save for the selected player, like this:

        MFL.IO.Save((MFL.Player)lbTeamPlayers.SelectedItem, true);

    Surprisingly, that worked on the first try. Let's see if we get as lucky with the delete player button:

        MFL.IO.Delete((MFL.Player)lbTeamPlayers.SelectedItem);
        Refresh();

    Note the use of the Refresh method again: I can't seem to figure out why updates to the underlying data source are immediately reflected, but adds and deletes are not. That is a problem for another day, and again my hunch is that I should be binding to something more complex than IEnumerable (like an observable collection). Now that an example of the basic CRUD methods is wired up, I want to quickly investigate the performance of this beast.

    I'm going to make a special button to add 30 teams, each with 50 players and 10 seasons worth of stats. If my math is right, that will end up with 15,000 rows of data, a pretty hefty amount for an XML file. The save of all this new data took a little over a minute, but that is acceptable because we wouldn't typically be saving batches of 15k records, and the resulting XML file size is a little over a megabyte. Not huge, but big enough to see some read performance numbers... or so I thought. It reads this file and renders the first team in under a second. That is unbelievable, but we are lazy loading and the file really wasn't that big. I will increase it to 50 teams with 100 players and 20 seasons each: 100,000 rows. It took a year and a half to save all of that data, and it resulted in an 8 megabyte file. Seriously, if you are loading XML files this large, get a freaking database! Despite this, it STILL takes under a second to load and render the first team, which is interesting mostly because I thought it was loading that entire 8 MB XML file behind the scenes. I have to say that I am quite impressed with the performance of the LINQ to XML approach, particularly since I took no efforts to optimize any of this code and was fairly new to the concept from the start. There might be some merit to this little project after all. Look out, SQL Server and Oracle: use XML files instead! Next up, I am going to completely pull the rug out from under the UI and change a number of entities in our model. How well will the code be regenerated? How much effort will be required to tie things back together in the UI?
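
    The hunch about the observable collection is the standard WPF answer: list controls refresh on adds and deletes only when the bound collection raises change notifications. As a sketch (the class and property names here are illustrative, not the post's actual generated MFL classes), exposing the children as ObservableCollection<T> is what removes the need for the manual Refresh:

        using System.Collections.ObjectModel;

        public class Team
        {
            // ObservableCollection implements INotifyCollectionChanged, so bound
            // WPF list controls update automatically on Add/Remove; a plain
            // IEnumerable/List<T> binding only reflects in-place property edits.
            public ObservableCollection<Player> Players { get; private set; }

            public Team()
            {
                Players = new ObservableCollection<Player>();
            }
        }

        // After this change, the add-player handler can simply do:
        //   selectedTeam.Players.Add(new Player { Name = "New Player" });
        // and the ListBox updates without reloading the screen.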

    Read the article

  • Life Cycle Navigator?

    - by C.W.Holeman II
    In many environments, the file system directory structure and naming conventions attempt to allow one to use a file manager to navigate the life cycle of a document. This overloading of functions makes it difficult for users to handle the complexity. A file browser is a tool that lets the user navigate among files located in a directory structure to find a specific file; whereas, when given a specific file, a life cycle navigator is a tool that lets the user navigate its life cycle, from source to published copy and across versions. Does a Life Cycle Navigator exist?

    I see a user pointing at an object:

    - Left mouse button displays the document
    - Right mouse button has a Life Cycle Navigator (LCN)

    The LCN displays a tree for a specific document within a file manager, for example:

        Published
            3.2 Current
            3.1
            3.0
            +2.x
            +1.x
            +Archived
            +All
        Source
            Draft
            3.2 Current
            3.1
            3.0
            +2.x
            +1.x
            +Archived
            +All
        +Work Flow
        +Properties

    Or from a command line:

        $ lcn x.pdf --open_source_document | my_favorite_editor
        $ lcn x.pdf --show_published_version_info
        $ lcn x.pdf --show_previous_publish_versions_info

    See also: Life Cycle Navigator.

    Read the article

  • How to fix Windows XP from muting sounds on every startup?

    - by Rookie
    This started to happen a few days ago: every time I start up my computer, the global sound volume ("Playback") is set to muted (via the checkbox) in the sound control panel. So I have to open the control panel every time I start up my computer and clear that checkbox. I had this bug once before, a long time ago. I have no idea why this is happening. Any ideas how to fix this, or what is causing it? I have SP3, and most updates were installed half a year ago or less.

    Read the article

  • CPU and HD degradation on sourced based Linux distribution

    - by danilo2
    I have wondered for a long time whether source-based Linux distributions, like Gentoo or Funtoo, "destroy" your system faster than binary ones (like Fedora or Debian). I'm talking about CPU and hard drive degradation. Of course, when you're updating your system, it has to compile everything from source, so updates take longer and your CPU runs under harder conditions (it is warmer and more heavily loaded). Such systems compile hundreds of packages weekly, so does it really matter? Does such a system degrade faster than binary-based ones?

    Read the article

  • How do I fix a terrible system error on ubuntu 12.04

    - by Anonymous
    I don't know what happened, but one day my computer had some sort of system error and could no longer update itself. The Software Center will not open: it begins to initialize, and then a message pops up saying there's a system error and it needs to shut down the Software Center. Another box pops up after I go to report it, saying it was unable to identify the source or package name. I also can't extract a zipped folder to anything, or reinstall Ubuntu from a USB boot drive anymore; it keeps telling me my computer isn't compatible, when I know for a fact it is, because that's how I got Ubuntu on here in the first place. The only thing I know about this error is that a message popped up after I went to check for updates, saying to report the problem and include this message in the report:

        'E: Malformed line 56 in source list /etc/apt/sources.list (dist parse)'

    It also called it a bug. I just want to know how to either get rid of the bug completely or find some way to be able to reinstall Ubuntu again. I know it's not a lot of information, but it's all I can give. Sorry.

    Read the article

  • Brownfield Support for OVMSS

    - by Owen Allen
    The area of virtualization saw quite a few enhancements with version 12.2. There's one particular virtualization enhancement that can make a big difference for a lot of people: support for brownfield Oracle VM Servers for SPARC. Brownfield refers to Oracle VM Servers for SPARC that were created outside of Ops Center. In older versions of Ops Center, you couldn't really do anything with them - Ops Center could only manage OVM Servers that it created. If you had OVM Servers outside of Ops Center, you'd have to recreate them if you wanted to manage them. In 12.2, though, this problem is cleared up. You can discover and manage OVM Servers for SPARC that you created outside of Ops Center, so long as the LDom Manager is running. When you discover the control domain, all of the logical domains are automatically discovered and managed and appear under the control domain in the Asset tree. If you want to use server pools and migrate the logical domains to a different Oracle VM Server for SPARC system, you'll need to move the metadata to a shared library and use shared Fibre Channel or iSCSI LUNs for the guest domain storage and add the server to a server pool. See the Oracle VM Server for SPARC chapter for more information.

    Read the article

  • Indie devs working with publishers

    - by MrDatabase
    I'm an independent game developer considering working with a publisher. This question is very informative; however, I have more questions. Please give feedback on the following issues... I think this can be helpful to many indie devs in the same situation.

    - Source code: is it common for developers to give the publisher the source code?
    - Code quality: does this matter any more when working with a publisher than when just working on your own (or in a small team)? Just wondering if developers working for the publisher might scoff at the code quality and perhaps influence the relationship between developer and publisher.
    - Unique game concepts: are publishers generally biased towards new/novel game concepts?
    - Intellectual property: if I send a playable demo to a publisher, what's to stop them from just reproducing the new/novel game mechanic? I think the answer is basically nothing... but I'm wondering if this is a realistic concern.
    - Revenue sharing: how does it work? What's a common ratio? 70/30? 30/70?
    - Flaky publishers: how common is it for a publisher to "string along" developers for a while and then just drop them? Can this be reconciled with a contract of some kind?

    And any other issues you've encountered or heard of.

    Read the article

  • Does Google Analytics exclude Campaign traffic from Facebook in the Social reports?

    - by user1612223
    For a while we have used campaign tags when putting posts on Facebook, so that we can run campaign reports in Google Analytics on those links. However, it appears that traffic from those links is being excluded from Google's Social reports. For example, between 7/20 and 8/19 I'm seeing 123 visits where Facebook is the source in my Campaigns report, but only 29 visits where Facebook is the source in my Social Sources report.

    Main questions:

    1. Does Google exclude campaign traffic from its Social reports?
    2. If it does, is there any way to reconcile that, so that the traffic shows up in both reports?
    3. If it doesn't, what could be causing the vast discrepancy?

    One observer noted that we are setting the Medium to "Post" when passing the campaign parameters, and that Google may only allow "Referral" traffic in its Social reports (just speculation). In that case we could potentially change the Medium to "Referral", but that would undermine some of our strategy in being able to set different mediums. I have also considered that the campaign traffic may have come to the site several times and the Social report may count the same user as fewer visits; however, over 70% of the Facebook campaign traffic is new traffic, so at a minimum there would need to be over 85 visits on the Social side for that argument to be valid. I've done several searches for any information on this topic and haven't run across much of anything. I did post the same question on Google's Product Forum and have not gotten a response; the title of that question was 'Facebook Campaign Traffic Not Showing in Social Reports'. The inability to pass campaign data on Facebook posts would make evaluating the performance of those specific posts very difficult, so I'm hoping there is a solution to this.
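
    For reference, the tagging pattern being described is the standard UTM query parameters appended to the shared link; the values below are illustrative:

        http://www.example.com/landing-page?utm_source=facebook&utm_medium=post&utm_campaign=august-promo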

    Read the article
