Search Results

Search found 3722 results on 149 pages for 'choice'.

  • Unable to make the session state request to the session state server

    - by Angry_IT_Guru
    For about 4-5 months now, I seem to be having this sporadic issue -- mainly during our busiest time of the day, between 10:30-11:45AM -- where all my Windows 2003 web servers in a Microsoft NLB cluster start throwing session state server errors. A sample error is below. System.Web.HttpException: Unable to make the session state request to the session state server. Please ensure that the ASP.NET State service is started and that the client and server ports are the same. If the server is on a remote machine, please ensure that it accepts remote requests by checking the value of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters\AllowRemoteConnection. If the server is on the local machine, and if the before mentioned registry value does not exist or is set to 0, then the state server connection string must use either 'localhost' or '127.0.0.1' as the server name. at System.Web.SessionState.OutOfProcSessionStateStore.MakeRequest(StateProtocolVerb verb, String id, StateProtocolExclusive exclusiveAccess, Int32 extraFlags, Int32 timeout, Int32 lockCookie, Byte[] buf, Int32 cb, Int32 networkTimeout, SessionNDMakeRequestResults& results) at System.Web.SessionState.OutOfProcSessionStateStore.SetAndReleaseItemExclusive(HttpContext context, String id, SessionStateStoreData item, Object lockId, Boolean newItem) at System.Web.SessionState.SessionStateModule.OnReleaseState(Object source, EventArgs eventArgs) at System.Web.HttpApplication.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) Now I'm using the ASP.NET State service on a centralized back-end Windows 2003 server that all servers communicate with. I was originally using SQL Server session state for a couple of years prior to having this issue. The problem with SQL was that when the issue occurred, it created a blocking situation which essentially impacted all users across all servers. The product company recommended that I use the standard ASP.NET State service, as that was what they technically supported. Why this would make a difference is beyond me -- but I had no choice but to try it! I have attempted creating multiple application pools, adding additional servers, changing the TCP/IP timeout from 20 to 30 seconds, and even calling Microsoft ASP.NET product support, with very little success. I even recommended that they review whether they are using read-only session state instead of read/write per page request -- as I understand it, read/write basically causes every page to make round-trips to the state server even if state isn't being used on the page. Unfortunately, the application is developed by our product company and they insist that it is something with my environment, because other clients do not have this sort of issue. However, I've talked to other clients and they tell me that when they've seen issues like this, they've basically had to create another web farm. This issue almost seems like I've simply reached some architectural limit within the application... Microsoft's position on the issue is that the session state needs to be reduced and that the return code reported back from the state server indicates the buffers are full. To better understand the scope of the issue (rather than wait for customers to call and complain), I installed ELMAH and configured it to send me e-mails when unhandled exceptions occur. I basically get 500-1000 e-mails during the period of high activity!
If anyone has any other ideas I could try, or better ways to troubleshoot, I'd appreciate it.
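    For reference, the read-only tweak I suggested to the vendor boils down to standard ASP.NET settings; a rough sketch (the state server host name is a placeholder, not our real config):

        <!-- web.config: point every web server at the central state server -->
        <sessionState mode="StateServer"
                      stateConnectionString="tcpip=stateserver01:42424"
                      timeout="20" />

        <!-- web.config: make read-only session state the default for all pages -->
        <pages enableSessionState="ReadOnly" />

        <%-- page directive: opt back in to read/write only where state is actually written --%>
        <%@ Page Language="C#" EnableSessionState="True" %>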

    Read the article

  • Multiple data centers and HTTP traffic: DNS Round Robin is the ONLY way to assure instant fail-over?

    - by vmiazzo
    Hi, multiple A records pointing to the same domain seem to be used almost exclusively to implement DNS Round Robin as a cheap load balancing technique. The usual warning against DNS RR is that it is not good for high availability: when one IP goes down, clients will continue to use it for minutes. A load balancer is often suggested as a better choice. Neither claim is completely true: When the traffic is HTTP, most HTML browsers are able to automatically try the next A record if the previous one is down, without a new DNS look-up. Read chapter 3.1 here and here. When multiple data centers are involved, DNS RR is the only option to distribute traffic across them. So, is it true that, with multiple data centers and HTTP traffic, the use of DNS RR is the ONLY way to assure instant fail-over when one data center goes down? Thanks, Valentino Edit: Of course each data center has a local load balancer with a hot spare. It's OK to sacrifice session affinity for instant fail-over. AFAIK the only way for a DNS to suggest one data center instead of another is to reply with just the IP (or IPs) associated with that data center. If the data center becomes unreachable, then all those IPs are also unreachable. This means that, even if smart HTML browsers are able to instantly try another A record, all the attempts will fail until the local cache entry expires and a new DNS lookup is done, fetching the new working IPs (I assume the DNS automatically suggests a new data center when one fails). So, "smart DNS" cannot assure instant fail-over. Conversely, DNS round-robin permits it: when one data center fails, smart HTML browsers (most of them) instantly try the other cached A records, jumping to another (working) data center. So, DNS round-robin doesn't assure session affinity or the lowest RTT, but it seems to be the only way to assure instant fail-over when the clients are "smart" HTML browsers. Edit 2: Some people suggest TCP Anycast as a definitive solution. This paper (chapter 6) explains that Anycast fail-over is related to BGP convergence. For this reason Anycast can take anywhere from 15 minutes down to 20 seconds to complete; 20 seconds is possible on networks whose topology was optimized for this. Probably only CDN operators can guarantee such fast fail-overs. Edit 3: I did some DNS look-ups and traceroutes (maybe some expert can double-check) and: The only CDN using TCP Anycast seems to be CacheFly; other operators like CDN networks and BitGravity use CacheFly. It seems that their edges cannot be used as reverse proxies, therefore they cannot be used to grant instant fail-over. Akamai and LimeLight seem to use geo-aware DNS. But! They return multiple A records, and from traceroutes it seems that the returned IPs are in the same data center. So, I'm puzzled how they can offer a 100% SLA when one data center goes down.
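    To make the setup concrete: DNS round robin is nothing more than several A records for the same name, e.g. in a BIND-style zone (addresses are illustrative only):

        ; one record per data center; clients get the whole set and pick/fail over among them
        www.example.com.   300   IN   A   192.0.2.10      ; data center 1
        www.example.com.   300   IN   A   198.51.100.10   ; data center 2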

    Read the article

  • Set up Linux box for hosting a-z

    - by microchasm
    I am in the process of reinstalling the OS on a machine that will be used to host a couple of apps for our business. The apps will be local only; access from external clients will be via vpn only. The prior setup used a hosting control panel (Plesk) for most of the admin, and I was looking at using another similar piece of software for the reinstall - but I figured I should finally learn how it all works. I can do most of the things the software would do for me, but am unclear on the symbiosis of it all. This is all an attempt to further distance myself from the land of Configuration Programmer/Programmer, if at all possible. I can't find a full walkthrough anywhere for what I'm looking for, so I thought I'd put up this question, and if people can help me on the way I will edit this with the answers, and document my progress/pitfalls. Hopefully someday this will help someone down the line. The details: CentOS 5.5 x86_64 httpd: Apache/2.2.3 mysql: 5.0.77 (to be upgraded) php: 5.1 (to be upgraded) The requirements: SECURITY!! Secure file transfer Secure client access (SSL Certs and CA) Secure data storage Virtualhosts/multiple subdomains Local email would be nice, but not critical The Steps: Download latest CentOS DVD-iso (torrent worked great for me). Install CentOS: While going through the install, I checked the Server Components option thinking I was going to be using another Plesk-like admin. In hindsight, considering I've decided to try to go my own way, this probably wasn't the best idea. Basic config: Setup users, networking/ip address etc. Yum update/upgrade. Upgrade PHP: To upgrade PHP to the latest version, I had to look to another repo outside CentOS. IUS looks great and I'm happy I found it! cd /tmp #wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm #rpm -Uvh epel-release-1-1.ius.el5.noarch.rpm #wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1-4.ius.el5.noarch.rpm #rpm -Uvh ius-release-1-4.ius.el5.noarch.rpm yum list | grep -w \.ius\. [will list all packages available in the IUS repo] rpm -qa | grep php [will list installed packages needed to be removed. the installed packages need to be removed before you can install the IUS packages otherwise there will be conflicts] #yum shell >remove php-gd php-cli php-odbc php-mbstring php-pdo php php-xml php-common php-ldap php-mysql php-imap Setting up Remove Process >install php53 php53-mcrypt php53-mysql php53-cli php53-common php53-ldap php53-imap php53-devel >transaction solve >transaction run Leaving Shell #php -v PHP 5.3.2 (cli) (built: Apr 6 2010 18:13:45) This process removes the old version of PHP and installs the latest. To upgrade mysql: Pretty much the same process as above with PHP #/etc/init.d/mysqld stop [OK] rpm -qa | grep mysql [installed mysql packages] #yum shell >remove mysql mysql-server Setting up Remove Process >install mysql51 mysql51-server mysql51-devel >transaction solve >transaction run Leaving Shell #service mysqld start [OK] #mysql -v Server version: 5.1.42-ius Distributed by The IUS Community Project The above upgrade instructions courtesy of IUS wiki: http://wiki.iuscommunity.org/Doc/ClientUsageGuide Create a chroot jail to hold sftp user via rssh. This will force SCP/SFTP and will circumvent traditional FTP server setup. 
#cd /tmp #wget http://dag.wieers.com/rpm/packages/rssh/rssh-2.3.2-1.2.el5.rf.x86_64.rpm #rpm -ivh rssh-2.3.2-1.2.el5.rf.x86_64.rpm #useradd -m -d /home/dev -s /usr/bin/rssh dev #passwd dev Edit /etc/rssh.conf to grant access to SFTP to rssh users. #vi /etc/rssh.conf Uncomment line allowscp This allows me to connect to the machine via SFTP protocol in Transmit (my FTP program of choice; I'm sure it's similar with other FTP apps). Above instructions for SFTP appropriated (with appreciation!) from http://www.cyberciti.biz/tips/linux-unix-restrict-shell-access-with-rssh.html And this is where I'm at. I will keep editing this as I make progress. Any tips on how to Configure virtual interfaces/ip based virtual hosts for SSL, setting up a CA, or anything else would be appreciated.
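    (For the virtual host part I still have to do, a minimal sketch of what an IP-based SSL vhost could look like on this stack -- Apache 2.2 here has no SNI, so each certificate gets its own IP via an interface alias; all paths, names and addresses below are placeholders, not a tested config:)

        # /etc/sysconfig/network-scripts/ifcfg-eth0:0 -- extra IP for the SSL site
        DEVICE=eth0:0
        IPADDR=192.168.1.51
        NETMASK=255.255.255.0
        ONBOOT=yes

        # /etc/httpd/conf.d/app1-ssl.conf (mod_ssl already provides Listen 443 in ssl.conf)
        <VirtualHost 192.168.1.51:443>
            ServerName app1.example.internal
            DocumentRoot /var/www/app1
            SSLEngine on
            SSLCertificateFile    /etc/pki/tls/certs/app1.crt
            SSLCertificateKeyFile /etc/pki/tls/private/app1.key
            SSLCACertificateFile  /etc/pki/tls/certs/my-ca.crt
        </VirtualHost>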

    Read the article

  • What is the best mid/high-end sound card for audio/music creation?

    - by Chris
    Hello, I have a computer shop myself, and I repair computers. But one of the things I really don't know (yet) is the performance of audio cards for music creation with MIDI. I have searched and searched and came up with some good reviews, but after browsing for a couple of hours I couldn't see the forest for the trees :-D (it's a Dutch expression). At one moment I thought the M-Audio Delta 1010LT would be a good PCIe card; later on I read that this card was released years ago (but that could be false information). Also, any personal experience would be great, but not necessary. I have looked at a few cards, and I hope someone can help me make a choice for a friend of mine. His budget is between $100 and $350. I know there are audio cards from $500-$1850, but that is just too expensive. The following specs are crucial: ASIO, MIDI, mic in, minimum 5.1 (7.1 recommended). It's not for airplay, but just to compose music at home, using Ableton and a MIDI keyboard. 1. M-Audio Delta 1010LT: 8 x 8 analog I/O 2 mic preamps or line inputs S/PDIF digital I/O (coaxial) with 2-channel PCM SCMS copy protection control digital I/O supports surround-encoded AC-3 and DTS pass-through 1 x 1 MIDI I/O directly drive up to 7.1 surround (bass management software included) software controlled 36-bit internal DSP digital mixing/routing +4dBu/-10dBV operation individually switched in software word clock I/O for sample accurate device synchronization 2. RME HDSP 9632: * Stereo analog input and output, balanced*, 24-bit/192kHz, > 110 dB SNR * Optional expansion boards with 4 balanced inputs and outputs each * All analog I/Os fully 192 kHz capable, so no reduction in channel count * 1 x ADAT digital in/out, 96 kHz capable (S/MUX) * 1 x SPDIF digital in/out, 192 kHz capable * 1 x breakout cable for coaxial SPDIF operation* * So up to 16 inputs and outputs usable simultaneously! * 1 x stereo headphone output, in parallel with the analog output but with its own level adjustment * 1 x MIDI I/O for 16 channels of hi-speed MIDI via breakout cable * DIGICheck, RME's unique metering and analysis tool with spectral analyser, professional level meters (2/8/16-channel), vector audio scope and various other analysis functions * HDSP Meter Bridge: freely scalable level meters with peak and RMS calculation in hardware * TotalMix: 512-channel mixer with 40-bit internal resolution 3. EMU 1212M (1212 M) PCIe: * Top-quality 24-bit/192kHz converters. * Hardware-driven effects. * DSP zero-latency hardware mixing and monitoring. * Analog and digital I/O plus MIDI. * EMU Production Tools software bundle - Cakewalk SONAR, Steinberg Cubase LE, Ableton Live E-MU Edition **EMU 1212M PCI-e inputs/outputs:** * 2 balanced jack inputs. * 2 balanced jack outputs. * 24-bit/192kHz ADAT I/O. * 24-bit/192kHz coaxial S/PDIF I/O switchable to AES/EBU. * MIDI I/O. 4.
M-Audio Audiophile 192: - Up to 24-bit/192kHz audio - 2 balanced analog inputs (1/4” TRS) - 2 balanced analog outputs (1/4” TRS) - S/PDIF digital I/O (coaxial RCA connectors) with 2-channel PCM - SCMS copy protection control - Digital I/O supports surround-encoded AC-3 and DTS pass-through - Direct hardware input monitoring via separate balanced 1/4” TRS monitor outputs - Software routing of inputs and outputs - Digital I/O can be routed to/from external effects - 16-channel MIDI I/O - ASIO, WDM, GSIF 2 and Core Audio driver support for compatibility with most applications - 64-bit driver support for Windows - PCI 2.2 compatibility - Apple G5 compatible - Incompatible exceptions - Includes Ableton Live Lite music production software, so you can make music right away - Works with other Delta cards Technical Specifcations: - Compatibility - ASIO - WDM - GSIF 2 - Core Audio

    Read the article

  • .AVI Files randomly cease to open, other strange errors too

    - by Ben Franchuk
    I recently (a couple of weeks ago) downloaded the complete series of Seinfeld, all in varying file types. I watched them in sequence according to season and airing date, and all was well. All of the files played fine with my media player of choice ("BS Player"), and once I had finished, I went on to watch some other TV I had previously downloaded (the U.S. series of "The Office"), and after that, some other film and then some music, over the following weeks (keep in mind all of these files are on the same hard drive). Later, more recently, I went back to watching Seinfeld. The episodes played well as they did before -- with the exclusion of a few in Season 7. I have not tested all of the episodes in the season, but upon inspection, the majority of them are experiencing this problem; the problem being simply that they don't open! BS Player says that the files are either damaged or that the codecs to play the files are not on my computer -- however I am certain that the files DO have the codecs, and I am pretty sure that they are NOT DAMAGED either. I have played the files with other players (such as VLC, Media Player Classic, and Windows Media Player), too, only to the same result of them not opening. Seemingly the only way that I can differentiate between a damaged file and a non-damaged file is the way the icon shows in Windows Explorer. For example, the below image is how Explorer shows the information of a file that is non-damaged... ...and below is how a damaged file appears... The most disturbing and confusing part of this, though, is the last episode in the season: it opens, but not as a video -- instead, as a 1 hour, 16 minute, and 35 second audio file! The file plays a song for the first 4 or so minutes, and then is pretty much silent (except for some extremely quiet noise) until the last minute or so, when a random array of chopped-up sounds and beeping noises plays. I do not recognise the song at the beginning of the file, but by the sounds of it, it is a song by the artist "Mr. Oizo," whose complete works I downloaded a couple of weeks before now; and a bit before then I had finished downloading Season 9 (not affected by these problems) of Seinfeld. I'd also like to note that the file I told of earlier (which played audio instead of video) reads as the same size as the other files in the season (around 175 MB) and also opens as a video clip. I have NEVER experienced any of these problems in the past, and they seem to be affecting only the one season of my downloaded TV. The problems have not arisen with any of the other files on my hard drive, or any of the files downloaded around the time or after the time I downloaded Season 7 of Seinfeld -- or at least not to my noticing. I use the hard drive these files are located on almost every day, so could that be the cause of these problems? Is this a sign that my HDD is soon going to die? If it helps, the HDD is a Western Digital MyBook 1.5 TB, 7500 RPM. It is connected to the computer via USB 2.0. EDIT! I noticed that this problem is now occurring with Season 9 of Seinfeld -- and, presumably, other files on the drive I have yet to check. Please, if you have ANY IDEA AT ALL what may be causing this or how to fix it, do tell me!

    Read the article

  • SQL Server 2000 and SSL Encryption

    - by Angry_IT_Guru
    We are a datacenter that hosts a SQL Server 2000 environment which provides database services for a product we sell; it is loaded as a rich-client application at each of our many clients and their workstations. Currently, the application uses straight ODBC connections from the client site to our datacenter. We need to begin encrypting the credentials -- since everything is clear-text today and the authentication is weakly encrypted -- and I'm trying to determine the best way to implement SSL on the server while minimizing the impact on the client. A few things, however: 1) We have our own Windows domain and all our servers are joined to our private domain. Our clients know nothing of our domain. 2) Typically, our clients connect to our datacenter servers either by: a) using a TCP/IP address, or b) using a DNS name that we publish via the internet, zone transfers from our DNS servers to our customers, or static HOSTS entries the client can add. 3) From what I understand of enabling encryption, I can go to the Network Utility and select the "encryption" option for the protocol that I wish to encrypt, such as TCP/IP. 4) When the encryption option is selected, I have a choice of installing a third-party certificate or a self-signed one. I have tested the self-signed, but do have potential issues; I'll explain in a bit. If I go with a third-party cert, such as Verisign or Network Solutions... what kind of certificate do I request? These aren't IIS certificates? When I create a self-signed certificate via Microsoft's certificate server, I have to select "Authentication certificate". What does this translate to in the third-party world? 5) If I create a self-signed certificate, I understand that the "issued to" name has to match the FQDN of the server that is running SQL. In my case, I have to use my private domain name. If I use this, what does this do for my clients when trying to connect to my SQL Server? Surely they cannot resolve my private DNS names on their network.... I've also verified that when the self-signed certificate is installed, it has to be in the local personal store for the user account that is running SQL Server. SQL Server will only start if the FQDN matches the "issued to" of the certificate and SQL is running under the account that has the certificate installed. If I use a self-signed certificate, does this mean I have to have every one of my clients install it to verify it? 6) If I use a third-party certificate, which sounds like the best option, do all my clients have to have internet access when accessing my private servers over their private WAN connection in order to verify the certificate? What do I do about the FQDN? It sounds like they have to use my private domain name -- which is not published -- and can no longer use the one that I set up for them to use? 7) I plan on upgrading to SQL 2005 soon. Is setup of SSL any easier/better with SQL 2005 than SQL 2000? Any help or guidance would be appreciated.

    Read the article

  • Facebook connect displaying invite friends dialog and closing on completion

    - by Dougnukem
    I'm trying to create a Facebook Connect application that displays a friend invite dialog within the page using Facebook's Javascript API (through a FBMLPopupDialog). The trouble is to display a friend invite dialog you use a multi-friend form which requires an action="url" attribute that represents the URL to redirect your page to when the user completes or skips the form. The problem is that I want to just close the FBMLPopupDialog (the same behavior as if the user just hit the 'X' button on the popup dialog). The best I can do is redirect the user back to the page they were on basically a reload but they lose all AJAX/Flash application state. I'm wondering if any Facebook Connect developers have run into this issue and have a good way to simply display a friend invite "lightbox" dialog within their website where they don't want to "refresh" or "redirect" when the user finishes. The facebook connect JS API provides a FB.Connect.inviteConnectUsers, which provides a nice dialog but only connects existing users of your application who also have a Facebook account and haven't connected. http://bugs.developers.facebook.com/show%5Fbug.cgi?id=4916 function fb_inviteFriends() { //Invite users log("Inviting users..."); FB.Connect.requireSession( function() { //Connect succes var uid = FB.Facebook.apiClient.get_session().uid; log('FB CONNECT SUCCESS: ' + uid); //Invite users log("Inviting users..."); //Update server with connected account updateAccountFacebookUID(); var fbml = fb_getInviteFBML() ; var dialog = new FB.UI. FBMLPopupDialog("Weblings Invite", fbml) ; //dialog.setFBMLContent(fbml); dialog.setContentWidth(650); dialog.setContentHeight(450); dialog.show(); }, //Connect cancelled function() { //User cancelled the connect log("FB Connect cancelled:"); } ); } function fb_getInviteFBML() { var uid = FB.Facebook.apiClient.get_session().uid; var fbml = ""; fbml = '<fb:fbml>\n' + '<fb:request-form\n'+ //Redirect back to this page ' action="'+ document.location +'"\n'+ ' method="POST"\n'+ ' invite="true"\n'+ ' type="Weblings Invite"\n' + ' content="I need your help to discover all the Weblings and save the Internet! WebWars: Weblings is a cool new game where we can collect fantastic creatures while surfing our favorite websites. Come find the missing Weblings with me!'+ //Callback the server with the appropriate Webwars Account URL ' <fb:req-choice url=\''+ WebwarsFB.WebwarsAccountServer +'/SplashPage.aspx?action=ref&reftype=Facebook' label=\'Check out WebWars: Weblings\' />"\n'+ '>\n'+ ' <fb:multi-friend-selector\n'+ ' rows="2"\n'+ ' cols="4"\n'+ ' bypass="Cancel"\n'+ ' showborder="false"\n'+ ' actiontext="Use this form to invite your friends to connect with WebWars: Weblings."/>\n'+ ' </fb:request-form>'+ ' </fb:fbml>'; return fbml; }

    Read the article

  • Need additional help with binding multiple CommandParameters using MultiBinding

    - by Dave
    I need to have a command handler for a ToggleButton that can take multiple parameters, namely the IsChecked property of said ToggleButton, along with a constant value, which could be a string, byte, int... doesn't matter. I found this great question on SO and followed the answer's link and read up on MultiBinding and IMultiValueConverter. It went really smoothly until I had to write the MultiBinding, when I realized that I also need to pass a constant value and couldn't do something like <Binding Value="1" /> I then came across another similar question that Kent Boogaart answered, and then I started to think about ways that I could get around this. One possible way is to not use MVVM and simply add the Tag property to my ToggleButton, in which case my MultiBinding would look like this: <MultiBinding Converter="{StaticResource MyConverter}"> <MultiBinding.Bindings> <Binding Path="IsChecked" /> <Binding Path="Tag" /> </MultiBinding.Bindings> </MultiBinding> Kent had made a comment along the lines of, "if you're using MVVM you should be able to get around this issue". However, I'm not sure that's an option for me, even though I have adopted MVVM as my WPF pattern of necessity choice. The reason why I say this is that I have wayyyy more than one ToggleButton in the UserControl, and each of the ToggleButtons' Commands need to call the same function. But since they are ToggleButtons, I can't use the property bound to IsChecked in the ViewModel, because I don't know which one was last clicked. I suppose I could add another private property to keep track of this, but it seems a little silly. As far as the constant goes, I could probably get rid of this if I did the tracking idea, but not sure of any other way to get around it. Does anyone have good suggestions for me here? :) EDIT -- ok, so I need to update my bindings, which still don't work quite right: <ToggleButton Tag="1" Command="{Binding MyCommand}" Style="{StaticResource PassFailToggleButtonStyle}" HorizontalContentAlignment="Center" Background="Transparent" BorderBrush="Transparent" BorderThickness="0"> <ToggleButton.CommandParameter> <MultiBinding Converter="{StaticResource MyConverter}"> <MultiBinding.Bindings> <Binding Path="IsChecked" RelativeSource="{RelativeSource Mode=Self}" /> <Binding Path="Tag" RelativeSource="{RelativeSource Mode=Self}" /> </MultiBinding.Bindings> </MultiBinding> </ToggleButton.CommandParameter> </ToggleButton> IsChecked was working, but not Tag. I just realized that Tag is a string... duh. It's working now! The key was to use a RelativeSource of Self.
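    For completeness, the converter behind {StaticResource MyConverter} is just an IMultiValueConverter that packages the two bound values; a minimal sketch (the class name is made up, yours may differ):

        using System;
        using System.Globalization;
        using System.Windows.Data;

        public class CheckedWithTagConverter : IMultiValueConverter
        {
            public object Convert(object[] values, Type targetType, object parameter, CultureInfo culture)
            {
                // values[0] = IsChecked, values[1] = Tag; clone because WPF reuses this array between calls
                return values.Clone();
            }

            public object[] ConvertBack(object value, Type[] targetTypes, object parameter, CultureInfo culture)
            {
                throw new NotSupportedException();
            }
        }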

    Read the article

  • ASP.NET MVC 2: Updating a Linq-To-Sql Entity with an EntitySet

    - by Simon
    I have a Linq to Sql entity which has an EntitySet. In my View I display the entity with its properties plus an editable list for the child entities. The user can dynamically add and delete those child entities. The DefaultModelBinder works fine so far; it correctly binds the child entities. Now my problem is that I just can't get Linq To Sql to delete the deleted child entities -- it will happily add new ones but not delete the removed ones. I have enabled cascade deleting in the foreign key relationship, and the Linq To Sql designer added the "DeleteOnNull=true" attribute to the foreign key relationships. If I manually delete a child entity like this: myObject.Childs.Remove(child); context.SubmitChanges(); it will delete the child record from the DB. But I can't get it to work for a model-bound object. I tried the following: // this does nothing public ActionResult Update(int id, MyObject obj) // obj now has 4 child entities { var obj2 = _repository.GetObj(id); // obj2 has 6 child entities if(TryUpdateModel(obj2)) // it successfully updates obj2 and its childs { _repository.SubmitChanges(); // nothing happens, records stay in DB } else ..... return RedirectToAction("List"); } and this throws an InvalidOperationException; I have a German OS so I'm not exactly sure what the error message is in English, but it says something along the lines of the entity needing a Version (timestamp row?) or no update check policies. I have set UpdateCheck="Never" on every column except the primary key column. public ActionResult Update(MyObject obj) { _repository.MyObjectTable.Attach(obj, true); _repository.SubmitChanges(); // never gets here, exception at attach } I've read a lot about similar "problems" with Linq To Sql, but it seems most of those "problems" are actually by design. So am I right in my assumption that this doesn't work like I expect it to? Do I really have to iterate through the child entities and delete, update and insert them manually? For such a simple object this may work, but I plan to create more complex objects with nested EntitySets and so on. This is just a test to see what works and what doesn't. So far I'm disappointed with Linq To Sql (maybe I just don't get it). Would Entity Framework or NHibernate be a better choice for this scenario? Or would I run into the same problem?
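    (For reference, the manual iteration I mention would look roughly like this -- just a sketch, assuming the child entity exposes an Id property:)

        public ActionResult Update(int id, MyObject obj)      // obj holds the 4 posted children
        {
            var obj2 = _repository.GetObj(id);                // obj2 holds the 6 stored children

            // drop children that are no longer present in the posted model
            var postedIds = obj.Childs.Select(c => c.Id).ToList();
            foreach (var child in obj2.Childs.Where(c => !postedIds.Contains(c.Id)).ToList())
                obj2.Childs.Remove(child);                    // with DeleteOnNull=true this becomes a DELETE

            if (TryUpdateModel(obj2))
                _repository.SubmitChanges();

            return RedirectToAction("List");
        }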

    Read the article

  • Howto - Running Redmine on mongrel as a service on windows

    - by Achilles
    I use Redmine on Mongrel as a project manager, and I use a batch file (start-redmine.bat) to start Redmine in Mongrel. There are 2 issues with my setup: 1. I have a running IIS on the server that occupies the HTTP port (80). 2. The start-redmine.bat must be periodically checked to see if it has stopped after a restart caused by the Windows Update service. For the first issue, I have no choice but to run Mongrel on a port like 3000, and for the second issue I have to create a Windows service that runs automatically in the background when Windows starts; and here comes the trouble! There are at least 3 ways to run Redmine as a service that I'm aware of; none of them is satisfactory from a performance point of view. You may read about them at http://stackoverflow.com/questions/877943/how-to-configure-a-rails-app-redmine-to-run-as-a-service-on-windows I tried them all. The easiest way to set up such a service is the mongrel_service approach; in three commands you're done, but the performance is significantly lower than running that batch file... Now, I want to show you my approach: First, suppose we have Ruby installed into c:\ruby and we have issued the command gem install mongrel to get the mongrel gem installed into c:\ruby\bin. Also, suppose we have installed Redmine into a folder like c:\redmine, and we have Ruby's path (i.e. c:\ruby\bin) in our PATH environment variable. Now download and install the Windows NT Resource Kit Tools from the Microsoft website. Open the command-line tool that comes with the Resource Kit (from the Start menu). Use instsrv to install a dummy service called Redmine using the following command: "[path-to-instsrv.exe]\instsrv" Redmine "[path-to-srvany.exe]\srvany.exe" in my case (which is the default case) it was something like this: "C:\Program Files\Windows Resource Kits\Tools\instsrv" Redmine "C:\Program Files\Windows Resource Kits\Tools\srvany.exe" Now create the batch file. Open Notepad, paste these instructions into it and then save it as "c:\redmine\start-redmine.bat" @echo off cd c:\redmine\ mongrel_rails start -a 0.0.0.0 -p 3000 -e production Now we need to configure that dummy service we created before. WATCH OUT WHAT YOU'RE DOING FROM HERE ON, OR YOU MAY CORRUPT YOUR WINDOWS. To configure that service, open the Windows registry editor (Start - Run - regedit) and navigate to this node: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Redmine Right-click on the "Redmine" node and, using the context menu, create a new key called Parameters (New - Key). Right-click on "Parameters" and create a String Value called Application. Do this again and create another String Value called AppParameters. Now double-click on "Application" and put cmd.exe into the "Value data" section. Then double-click on "AppParameters" and put /C "C:\redmine\start-redmine.bat" into the "Value data" section. We're done! Issue this command to run Redmine on Mongrel as a service: net start Redmine
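    (The same registry changes can also be scripted instead of edited by hand; a sketch using reg.exe with the exact keys and values described above -- double-check the quoting before running it:)

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\Redmine\Parameters" /v Application   /t REG_SZ /d "cmd.exe" /f
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\Redmine\Parameters" /v AppParameters /t REG_SZ /d "/C \"C:\redmine\start-redmine.bat\"" /f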

    Read the article

  • Sharing a UIView between UIViewControllers in a UITabBarController

    - by Wireless Designs
    Hi all - I have a UIScrollView that houses a gallery of images the user can scroll through. This view needs to be visible on each of three separate UIViewControllers that are housed within a UITabBarController. Right now, I have three separate UIScrollView instances in the UITabBarController subclass, and the controller manages keeping the three synchronized (when a user scrolls the one they can see, programmatically scrolling the other two to match, etc.), which is not ideal. I would like to know if there is a way to work with only ONE instance of the UIScrollView, but have it show up only in the UIViewController that the user is currently interacting with. This would completely eliminate all the synchronization code. Here is basically what I have now in the UITabBarController (which is where all this is currently managed): @interface ScrollerTabBarController : UITabBarController { FirstViewController *firstView; SecondViewController *secondView; ThirdViewController *thirdView; UIScrollView *scrollerOne; UIScrollView *scrollerTwo; UIScrollView *scrollerThree; } @property (nonatomic,retain) IBOutlet FirstViewController *firstView; @property (nonatomic,retain) IBOutlet SecondViewController *secondView; @property (nonatomic,retain) IBOutlet ThirdViewController *thirdView; @property (nonatomic,retain) IBOutlet UIScrollView *scrollerOne; @property (nonatomic,retain) IBOutlet UIScrollView *scrollerTwo; @property (nonatomic,retain) IBOutlet UIScrollView *scrollerThree; @end @implementation ScrollerTabBarController - (void)layoutScroller:(UIScrollView *)scroller {} - (void)scrollToMatch:(UIScrollView *)scroller {} - (void)viewDidLoad { [self layoutScroller:scrollerOne]; [self layoutScroller:scrollerTwo]; [self layoutScroller:scrollerThree]; [scrollerOne setDelegate:self]; [scrollerTwo setDelegate:self]; [scrollerThree setDelegate:self]; [firstView setGallery:scrollerOne]; [secondView setGallery:scrollerTwo]; [thirdView setGallery:scrollerThree]; } - (void)scrollViewDidEndDecelerating:(UIScrollView *)scrollView { [self scrollToMatch:scrollView]; } @end The UITabBarController gets notified (as the scroll view's delegate) when the user scrolls one of the instances, and then calls methods like scrollToMatch: to sync up the other two with the user's choice. Is there something that can be done, using a many-to-one relationship on IBOutlet or something like that, to narrow this down to one instance so I'm not having to manage three scroll views? I tried keeping a single instance and moving the pointer from one view to the next using the UITabBarControllerDelegate methods (calling setGallery:nil on the current and setGallery:scrollerOne on the next each time it changed), but the scroller never moved to the other tabs. Thanks in advance!
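    (For reference, the single-instance approach I tried was roughly along these lines -- a sketch, with sharedScroller as a stand-in property for the one UIScrollView:)

        // UITabBarControllerDelegate: re-parent the single scroll view into whichever tab was selected
        - (void)tabBarController:(UITabBarController *)tabBarController
         didSelectViewController:(UIViewController *)viewController
        {
            [self.sharedScroller removeFromSuperview];
            [viewController.view addSubview:self.sharedScroller];
        }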

    Read the article

  • Lots of questions about file I/O (reading/writing message strings)

    - by Nazgulled
    Hi, for this university project I'm doing (for which I've made a couple of posts in the past), which is some sort of social network, users must be able to exchange messages. At first, I designed my data structures to hold ALL messages in a linked list, limiting the message size to 256 chars. However, I think my instructors will prefer it if I save the messages on disk and read them only when I need them. Of course, they won't say what they prefer; I need to make a choice and justify as best I can why I went that route. One thing to keep in mind is that I only need to save the latest 20 messages from each user, no more. Right now I have a hash table that will act as an inbox; this will be inside the user profile. This hash table will be indexed by name (the user that sent the message). The value for each element will be a data structure holding an array of size_t with 20 elements (20 messages, like I said above). The idea is to keep track of the disk file offsets and bytes written. Then, when I need to read a message, I just need to use fseek() and read the necessary bytes. I think this could work nicely... I could use just one single file to hold all messages from all users in the network. I'm saying one single file because a colleague asked an instructor about saving the messages from each user independently, and he replied that it might not be the best approach because the file system has its limits. That's why I'm thinking of going the single-file route. However, this presents a problem... Since I only need to save the latest 20 messages, I need to discard the older ones when I reach this limit. I have no idea how to do this... All I know about is fread() and fwrite() to read/write bytes from/to files. How can I go to a file offset and say "hey, delete the following X bytes"? Even if I could do that, there's another problem... All offsets below that one would be completely different and I would have to process all users' mailboxes to fix the problem. Which would be a pain... So, any suggestions to solve my problems? What do you suggest?
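    (To make the idea concrete, here is a rough sketch of the fixed-slot layout I'm considering -- 20 fixed-size slots per mailbox, overwritten circularly, so nothing ever has to be physically deleted from the file; the structure and names are only illustrative:)

        #include <stdio.h>
        #include <string.h>

        #define MSG_SIZE 256
        #define SLOTS    20

        typedef struct {
            long base;   /* file offset where this user's 20 slots start */
            int  next;   /* index of the slot to overwrite next (0..19)  */
        } Mailbox;

        /* overwrite the oldest slot instead of ever deleting bytes from the file */
        static int save_message(FILE *f, Mailbox *mb, const char *text)
        {
            char slot[MSG_SIZE] = {0};
            strncpy(slot, text, MSG_SIZE - 1);
            if (fseek(f, mb->base + (long)mb->next * MSG_SIZE, SEEK_SET) != 0)
                return -1;
            if (fwrite(slot, MSG_SIZE, 1, f) != 1)
                return -1;
            mb->next = (mb->next + 1) % SLOTS;
            return 0;
        }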

    Read the article

  • DynamicQuery: How to select a column with linq query that takes parameters

    - by Richard77
    Hello, we want to set up a directory of all the organizations working with us. They are incredibly diverse (government, embassies, private companies, and organizations depending on them). So, I've resolved to create 2 tables. Table 1 will treat all the organizations equally, i.e. it'll collect all the basic information (name, address, phone number, etc.). Table 2 will establish the hierarchy among all the organizations. For instance, the Program for Illiterate Adults depends on the National Institute for Social Security, which depends on the Labor Ministry. In the Hierarchy table, each column represents a level. So, for the example above: (i) Labor Ministry - Level1 (column1), (ii) National Institute for Social Security - Level2 (column2), (iii) Program for Illiterate Adults - Level3 (column3). To attach an organization to a hierarchy, the user needs to go level by level (i.e. column by column). So, there will be at least 3 situations: If an adequate hierarchy exists for an organization (for instance, level1: US Embassy), that organization can be added (for instance, level2: USAID) -- US Embassy/USAID, and so on. What if one or more levels are missing? Then they need to be added. What if the hierarchy needs to be modified? Not everything needs to be modified. I do not have any choice but to work level by level (i.e. column by column). It does not make sense to have all the levels in one form, as the user needs to navigate hierarchies to find the right one to attach an organization to. Let's say I have these queries in my repository (just so you get the idea). Query1 var orgHierarchy = (from orgH in db.Hierarchy select orgH.Level1).FirstOrDefault(); Query2 var orgHierarchy = (from orgH in db.Hierarchy select orgH.Level2).FirstOrDefault(); Query3, Query4, etc. The above queries are the same except for the property queried (Level1, Level2, Level3, etc.). Question: Is there a general way of writing the above queries as one, so that the user can track a hierarchy level by level to attach an organization? In other words, not knowing in advance which column to query, I still need to be able to do so depending on some conditions. For instance, an organization X depends on Y. Knowing that Y is somewhere on the 3rd level, I'll go to the 4th level, linking X to Y. I need to select (not manually) a column with only one query that takes parameters. Thanks for helping
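    (To show what I mean by one query that takes parameters, here is a sketch of passing the column as an expression -- assuming a Hierarchy entity like in the queries above:)

        using System;
        using System.Linq;
        using System.Linq.Expressions;

        // one method instead of Query1, Query2, Query3...
        public string GetLevel(Expression<Func<Hierarchy, string>> levelSelector)
        {
            return db.Hierarchy.Select(levelSelector).FirstOrDefault();
        }

        // usage:
        // var level1 = GetLevel(h => h.Level1);
        // var level2 = GetLevel(h => h.Level2);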

    Read the article

  • Reliable way of generating unique hardware ID

    - by mr.b
    Question: what's the best way to accomplish the following? I have to come up with a unique ID for each networked client, such that: it (the ID) should persist once the client software is installed on the target computer, and should continue to persist if the software is re-installed on the same computer and the same OS installation; it should not change if the hardware configuration is modified in most ways (except changing the motherboard); and when a hard drive with the client software installed is cloned to another computer with an identical hardware configuration (or as similar as possible), the client software should be aware of that change. A little bit of explanation and some back-story: This is basically the age-old question that also touches on the topic of software copy-protection, as some of the mechanisms used in that area are mentioned here. I should be clear at this point that I'm not looking for a copy-protection scheme. Please, read on. :) I'm working on client-server software that is supposed to work in a local network. One of the problems I have to solve is to identify each unique client in the network (not so much of a problem), so that I can apply certain attributes to every specific client, and retain and enforce those attributes during the deployment lifetime of a specific client. While I was looking for a solution, I was aware of the following: The Windows activation system uses some kind of heavy fingerprinting mechanism that is extremely sensitive to hardware modifications. Disk imaging software copies along all Volume IDs (tied to each partition when formatted) and any custom, uniquely generated IDs created during the installation process, during first run, or in any other way that is strictly software in nature and stored in the registry or on the hard drive -- so it's very easy to confuse the two. The obvious choice for this kind of problem would be to find out the BIOS identifiers (not 100% sure whether these are unique across identical motherboard models, though), as that's the only thing I can rely on that isn't duplicated or transferred by cloning and that can't be changed (at least not by using some user-space program). Everything else fails as either not reliable (MAC cloning, anyone?) or too demanding (in the sense that it's too sensitive to configuration changes). Am I missing something obvious here? The sub-question I'd like to ask is: am I doing this correctly, architecture-wise? Perhaps there is a better tool for the task that I have to accomplish... Another approach I had in mind is something similar to a handshake mechanism, where the server maintains an internal lookup table of connected client IDs (which can even be completely software-based and non-unique at any given moment) and tells the client to come up with a different ID during the handshake if a duplicate ID is provided upon connection. That approach, unfortunately, doesn't play nicely with one of the requirements: to tie attributes to a specific client during its lifetime.
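    (For what it's worth, the BIOS/motherboard identifiers I mention can be read on Windows through WMI; a rough sketch -- with no guarantee that every vendor actually populates these fields:)

        using System.Management;   // add a reference to System.Management.dll

        static string GetBoardSerial()
        {
            // Win32_BIOS exposes SerialNumber as well; Win32_BaseBoard is the motherboard itself
            using (var searcher = new ManagementObjectSearcher("SELECT SerialNumber FROM Win32_BaseBoard"))
            {
                foreach (ManagementObject mo in searcher.Get())
                    return mo["SerialNumber"] as string;   // may be empty or generic on some boards
            }
            return null;
        }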

    Read the article

  • Is MVVM pointless?

    - by joebeazelman
    Is orthodox MVVM implementation pointless? I am creating a new application and I considered Windows Forms and WPF. I chose WPF because it's future-proof and offers lots of flexibility. There is less code, and it's easier to make significant changes to your UI using XAML. Since the choice for WPF is obvious, I figured that I may as well go all the way by using MVVM as my application architecture, since it offers blendability, separation of concerns and unit testability. Theoretically, it seems beautiful, like the holy grail of UI programming. This brief adventure, however, has turned into a real headache. As expected in practice, I'm finding that I've traded one problem for another. I tend to be an obsessive programmer in that I want to do things the right way so that I can get the right results and possibly become a better programmer. The MVVM pattern just flunked my test on productivity and has turned into a big yucky hack! The clear case in point is adding support for a modal dialog box. The correct way is to put up a dialog box and tie it to a view model. Getting this to work is difficult. In order to benefit from the MVVM pattern, you have to distribute code in several places throughout the layers of your application. You also have to use esoteric programming constructs like templates and lambda expressions. Stuff that makes you stare at the screen scratching your head. This makes maintenance and debugging a nightmare waiting to happen, as I recently discovered. I had an about box working fine until I got an exception the second time I invoked it, saying that it couldn't show the dialog box again once it was closed. I had to add an event handler for the close functionality to the dialog window, another one in the IDialogView implementation of it, and finally another in the IDialogViewModel. I thought MVVM would save us from such extravagant hackery! There are several folks out there with competing solutions to this problem, and they are all hacks that don't provide a clean, easily reusable, elegant solution. Most of the MVVM toolkits gloss over dialogs, and when they do address them, they are just alert boxes that don't require custom interfaces or view models. I'm planning on giving up on the MVVM pattern, at least the orthodox implementation of it. What do you think? Has it been worth the trouble for you, if you had any? Am I just an incompetent programmer, or is MVVM not what it's hyped up to be?

    Read the article

  • Which OAuth library do you find works best for Objective-C/iPhone?

    - by Brennan
    I have been looking to switch to OAuth for my Twitter integration code and now that there is a deadline in less than 7 weeks (see countdown link) it is even more important to make the jump to OAuth. I have been doing Basic Authentication which is extremely easy. Unfortunately OAuth does not appear to be something that I would whip together in a couple of hours. http://www.countdowntooauth.com/ So I am looking to use a library. I have put together the following list. MPOAuth MGTwitterEngine OAuthConsumer I see that MPOAuth has some great features with a good deal of testing code in place but there is one big problem. It does not work. The sample iPhone project that is supposed to authenticate with Twitter causes an error which others have identified and logged as a bug. http://code.google.com/p/mpoauthconnection/issues/detail?id=29 The last code change was March 11 and this bug was filed on March 30. It has been over a month and this critical bug has not been fixed yet. So I have moved on to MGTwitterEngine. I pulled down the source code and loaded it up in Xcode. Immediately I find that there are a few dependencies and the README file does not have a clear list of steps to fetch those dependencies and integrate them with the project so that it builds successfully. I see this as a sign that the project is not mature enough for prime time. I see also that the project references 2 libraries for JSON when one should be enough. One is TouchJSON which has worked well for me so I am again discouraged from relying on this project for my applications. I did find that MGTwitterEngine makes use of OAuthConsumer which is one of many OAuth projects hosted by an OAuth project on Google Code. http://code.google.com/p/oauth/ http://code.google.com/p/oauthconsumer/wiki/UsingOAuthConsumer It looks like OAuthConsumer is a good choice at first glance. It is hosted with other OAuth libraries and has some nice documentation with it. I pulled down the code and it builds without errors but it does have many warnings. And when I run the new Build and Analyze feature in Xcode 3.2 I see 50 analyzer results. Many are marked as potential memory leaks which would likely lead to instability in any app which uses this library. It seems there is no clear winner and I have to go with something before the big Twitter OAuth deadline. Any suggestions?

    Read the article

  • What Is The Best Database For Delphi Desktop Applications That Supports Stored Procedures?

    - by Cape Cod Gunny
    I started with Turbo Pascal 3, went to TP5, Bought TP6 called Borland the next day and downgraded to TP5.5. Bought Delphi 3, and now have Delphi 5 Enterprise. I sort of lost interest in writing code about 4-5 years ago for two reasons; Spent all day writing ASP & SQL for someone else. PC Techniques magazine went away. I've got a few programs in the shareware market that are solid performers but are in need of serious updating. I love Delphi or did when it was Borland (before Borland bought DBase and all the other crap), I'd like to salvage as much of my D5E code as possible but I doubt I can. I plan on upgrading to Delphi 2010. My next software release needs to interact with a database. I'm very proficient with MS Sql and like to put all of the database code in stored procedures. What is the best database choice that interacts well with Delphi, allows stored procedures and is so easy to deploy that even the Geico gecko could deploy it? 10/25/2009 18:53 PM EST Re-Opened After Reading Install Docs for Delphi 2010 I downloaded a trial version of Delphi 2010 and unzipped the install. I've been reading the install docs included in the package. I started with the install.htm inside the zip package. install.htm wisely tells you to see the following two articles: Installation Notes: http://edn.embarcadero.com/article/39754 Release Notes: http://edn.embarcadero.com/article/39758 the release notes state the following... MSSQL driver requires the installation of the SQL Native Client. SQL Native Client 2008 is required for dbxmss.dll. SQL Native Client 2005 is required for dbxmss9.dll I checked my machine to see if SQL Native Client is installed. Nope. I wasn't done reading the docs so I made a note to install SQL Native Client. I googled dbxmss.dll and dbxmss9.dll and found a very interesting thread on the Embarcadero forums. read thread here. After reading this thread and some careful thought I don't think I will be using Microsoft SQL Express. I can't rely on my customers having the right drivers installed. So, I'm back to looking for a different solution. If I'm selling a $40 product to the general masses I need to have a bulletproof solution that doesn't require my brand new customer to update their machine before my software will work.

    Read the article

  • Java language features which have no equivalent in C#

    - by jthg
    Having mostly worked with C#, I tend to think in terms of C# features which aren't available in Java. After working extensively with Java over the last year, I've started to discover Java features that I wish were in C#. Below is a list of the ones that I'm aware of. Can anyone think of other Java language features which a person with a C# background may not realize exists? The articles http://www.25hoursaday.com/CsharpVsJava.html and http://en.wikipedia.org/wiki/Comparison_of_Java_and_C_Sharp give a very extensive list of differences between Java and C#, but I wonder whether I missed anything in the (very) long articles. I can also think of one feature (covariant return type) which I didn't see mentioned in either article. Please limit answers to language or core library features which can't be effectively implemented by your own custom code or third party libraries. Covariant return type - a method can be overridden by a method which returns a more specific type. Useful when implementing an interface or extending a class and you want a method to override a base method, but return a type more specific to your class. Enums are classes - an enum is a full class in java, rather than a wrapper around a primitive like in .Net. Java allows you to define fields and methods on an enum. Anonymous inner classes - define an anonymous class which implements a method. Although most of the use cases for this in Java are covered by delegates in .Net, there are some cases in which you really need to pass multiple callbacks as a group. It would be nice to have the choice of using an anonymous inner class. Checked exceptions - I can see how this is useful in the context of common designs used with Java applications, but my experience with .Net has put me in a habit of using exceptions only for unrecoverable conditions. I.E. exceptions indicate a bug in the application and are only caught for the purpose of logging. I haven't quite come around to the idea of using exceptions for normal program flow. strictfp - Ensures strict floating point arithmetic. I'm not sure what kind of applications would find this useful. fields in interfaces - It's possible to declare fields in interfaces. I've never used this. static imports - Allows one to use the static methods of a class without qualifying it with the class name. I just realized today that this feature exists. It sounds like a nice convenience.
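    (A quick sketch of the covariant return type point, since it's the one I didn't see mentioned in either article:)

        class Animal {
            Animal reproduce() { return new Animal(); }
        }

        class Sheep extends Animal {
            @Override
            Sheep reproduce() { return new Sheep(); }   // narrower return type allowed since Java 5
        }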

    Read the article

  • Smooth animation using MatrixTransform?

    - by Mattias Konradsson
    I'm trying to do an Matrix animation where I both scale and transpose a canvas at the same time. The only approach I found was using a MatrixTransform and MatrixAnimationUsingKeyFrames. Since there doesnt seem to be any interpolation for matrices built in (only for path/rotate) it seems the only choice is to try and build the interpolation and DiscreteMatrixKeyFrame's yourself. I did a basic implementation of this but it isnt exactly smooth and I'm not sure if this is the best way and how to handle framerates etc. Anyone have suggestions for improvement? Here's the code: MatrixAnimationUsingKeyFrames anim = new MatrixAnimationUsingKeyFrames(); int duration = 1; anim.KeyFrames = Interpolate(new Point(0, 0), centerPoint, 1, factor,100,duration); this.matrixTransform.BeginAnimation(MatrixTransform.MatrixProperty, anim,HandoffBehavior.Compose); public MatrixKeyFrameCollection Interpolate(Point startPoint, Point endPoint, double startScale, double endScale, double framerate,double duration) { MatrixKeyFrameCollection keyframes = new MatrixKeyFrameCollection(); double steps = duration * framerate; double milliSeconds = 1000 / framerate; double timeCounter = 0; double diffX = Math.Abs(startPoint.X- endPoint.X); double xStep = diffX / steps; double diffY = Math.Abs(startPoint.Y - endPoint.Y); double yStep = diffY / steps; double diffScale= Math.Abs(startScale- endScale); double scaleStep = diffScale / steps; if (endPoint.Y < startPoint.Y) { yStep = -yStep; } if (endPoint.X < startPoint.X) { xStep = -xStep; } if (endScale < startScale) { scaleStep = -scaleStep; } Point currentPoint = new Point(); double currentScale = startScale; for (int i = 0; i < steps; i++) { keyframes.Add(new DiscreteMatrixKeyFrame(new Matrix(currentScale, 0, 0, currentScale, currentPoint.X, currentPoint.Y), KeyTime.FromTimeSpan(TimeSpan.FromMilliseconds(timeCounter)))); currentPoint.X += xStep; currentPoint.Y += yStep; currentScale += scaleStep; timeCounter += milliSeconds; } keyframes.Add(new DiscreteMatrixKeyFrame(new Matrix(endScale, 0, 0, endScale, endPoint.X, endPoint.Y), KeyTime.FromTimeSpan(TimeSpan.FromMilliseconds(0)))); return keyframes; }

    Read the article

  • Fulltext search for Django: MySQL not so bad? (vs Sphinx, Xapian)

    - by Eric
    I am evaluating fulltext search engines for Django. My requirements: simple to install, fast indexing, fast index updates, non-blocking while indexing, and fast search. After reading many web pages, I put together a short list: MySQL MyISAM fulltext, djapian/python-xapian, and django-sphinx. I did not choose Lucene because it seems complex, nor Haystack because it has fewer features than djapian/django-sphinx (such as field weighting). Then I ran some benchmarks. To do so, I collected many free books from the net to generate a database table with 1 485 000 records (id, title, body); each record is about 600 bytes long. From the database, I also generated a list of 100 000 existing words and shuffled them to create a search list. For the tests, I made two runs on my laptop (4 GB RAM, dual core 2.0 GHz): the first one just after a server reboot to clear all caches, the second immediately afterwards in order to test how good the cached results are. Here are the "home made" benchmark results for 1 485 000 records with title (150 bytes) and body (450 bytes):

    MySQL 5.0.75/Ubuntu 9.04 fulltext:
        Full indexing: 7m14.146s
        1 thread, 1000 searches with a single word randomly taken from the database:
            First run: 0:01:11.553524 / next run: 0:00:00.168508

    MySQL 5.5.4-m3/Ubuntu 9.04 fulltext:
        Full indexing: 6m08.154s
        1 thread, 1000 searches with a single word randomly taken from the database:
            First run: 0:01:11.553524 / next run: 0:00:00.168508
        1 thread, 100000 searches with a single word randomly taken from the database:
            First run: 9m09s / next run: 5m38s
        1 thread, 10000 random strings (which should not be found in the database), just after the 100000-search test: 0:00:15.007353
        1 thread, boolean search, 1000 x (+word1 +word2):
            First run: 0:00:21.205404 / next run: 0:00:00.145098

    Djapian fulltext:
        Full indexing: 84m7.601s
        1 thread, 1000 searches with a single word randomly taken from the database, with prefetch:
            First run: 0:02:28.085680 / next run: 0:00:14.300236

    python-xapian fulltext:
        1 thread, 1000 searches with a single word randomly taken from the database:
            First run: 0:01:26.402084 / next run: 0:00:00.695092

    django-sphinx fulltext:
        Full indexing: 1m25.957s
        1 thread, 1000 searches with a single word randomly taken from the database:
            First run: 0:01:30.073001 / next run: 0:00:05.203294
        1 thread, 100000 searches with a single word randomly taken from the database:
            First run: 12m48s / next run: 9m45s
        1 thread, 10000 random strings (which should not be found in the database), just after the 100000-search test: 0:00:23.535319
        1 thread, boolean search, 1000 x (word1 word2):
            First run: 0:00:20.856486 / next run: 0:00:03.005416

    As you can see, MySQL is not so bad at all for fulltext search. In addition, its query cache is very efficient. MySQL seems to me a good choice, as there is nothing to install (I just need to write a small script to synchronize an InnoDB production table to a MyISAM search table) and I do not really need advanced search features like stemming, etc. Here is the question: what do you think about the MySQL fulltext search engine vs Sphinx and Xapian?

    Read the article

  • How to Practice Unix Programming in C?

    - by danben
    After five years of professional Java (and to a lesser extent, Python) programming and slowly feeling my CS education slip away, I decided I wanted to broaden my horizons / general usefulness to the world and do something that feels more (to me) like I really have an influence over the machine. I chose to learn C and Unix programming since I feel that is where many of the most interesting problems are. My end goal is to be able to do this professionally, if for no other reason than the fact that I have to spend 40-50 hours per week on work that pays the bills, so it may as well also be the type of coding I want to get better at. Of course, you don't get hired to do things you haven't done before, so for now I am ramping up on my own. To this end, I started with K&R, which was a great resource, in part due to the exercises spread throughout each chapter. After that I moved on to Computer Systems: A Programmer's Perspective, followed by ten chapters of Advanced Programming in the Unix Environment. When I am done with this book, I will read Unix Network Programming. What I find missing in the Stevens books is programming problems; they mainly document functionality and provide examples, with a few end-of-chapter questions following. I feel I would benefit much more from being challenged to use the knowledge in each chapter, a la K&R. I could write some test program for each function, but this is a less desirable method as (1) I would probably be less motivated than if I were rising to some external challenge, and (2) I will naturally only think to use the function in the ways that have already occurred to me. So, I'd like to get some recommendations on how to practice. Obviously, my first choice would be to find some resource that has Unix programming challenges. I have also considered finding and attempting to contribute to some open-source C project, but this is a bit daunting, as there would be some overhead in learning to use the software and then learning the codebase. The only open-source C project I can think of that I use regularly is Python, and I'm not sure how easy that would be to get started on. That said, I'm open to all kinds of suggestions, as there are likely things I haven't even thought of.

    Read the article

  • From a 3D modeler to an iPhone app - what are best practices?

    - by bonkey
    I am quite new to 3D programming on the iPhone and I would like to ask for hints about organizing the work between designers and programmers on that platform, most of all what kinds of tools, libraries, or plugins work best together on both sides. Although I consider this a general best-practices question, I would also like to find a solution for my current situation, which I describe further below. I've already done some research and found the following libraries:
    - SIO2
    - Khronos OpenGL ES 1.x SDK for PowerVR MBX
    - Unity3D
    - Oolong Game Engine
    I've also checked modellers, and plugins for them, that produce output formats readable by those tools:
    - obj2opengl (Wavefront OBJ to plain header file converter)
    - Blender with the SIO2 exporter
    - iphonewavefrontloader
    - Cheetah3D
    - PVRGeoPOD for 3DS / Maya
    Unfortunately, I still have no clear picture of how to combine any of these tools to get a designer's work into an application. I'm looking for a way to bring it across as completely as possible: models, lights, scenes, textures, maybe some simple animations (but rather no game-like physics), yet I still have nothing. And here is my situation: I would like to find the right way to present a few (but quite complicated) models from a single scene. The designers mostly use 3DS Max 9, sometimes 10 (which partly rules out PVRGeoPOD), and they are rather reluctant to switch to something else, but if there's no other choice I suppose it would be possible. The basic rule I've found in several places, "use Wavefront OBJ", doesn't always work; I haven't got any acceptable results with production files, actually. The only things that worked fine were some bare examples. Some of my models imported incomplete, sometimes the exporters hung or generated enormous files not really usable on an iPhone, and sometimes enabling textures (with GL_TEXTURE_2D) simply crashed the app. I know the problem might be overly complicated models or my own mistakes coming from inexperience, but I cannot find any guidelines for this process that would allow streamlined cooperation with designers. I am even willing to write some things from scratch in pure OpenGL ES if necessary, but I would like to avoid whatever can be avoided and get the most out of the model files. The ideal would be the effect I saw in some SIO2 tutorials: export, build & go. But at the moment I've only got "import, wrong", "import, where are the textures?", "import, that almost looks fine, export, hang" and so on... Is it really so frustrating, or have I just missed something obvious? Can anybody share his/her experience in this field and tell me what kind of software they use for "making things happen"?

    Read the article

  • Is One Tool or a Suite of Tools Better for Scrum?

    - by Rob Wells
    G'day, Edit: We've been using Scrum very successfully for several years on several projects of varying sizes. In fact, our team developed the successful iPlayer project for the BBC using a classical Scrum approach. After using various combinations of tools, some high-tech, some low-tech, across these projects, we now wish to try adopting a suitable tool suite. Our manager is, to some extent, attempting to force the adoption of a single suite of tools for Scrum. I've looked at the SO question "Best Scrum tools" and most people seem to recommend either: a suite of low-tech solutions, e.g. whiteboards, post-its, index cards, etc., or a monolithic tool that tries to satisfy as much of the process as possible, e.g. Agilo, Mingle, ScrumWorks, Target Process, etc. Our team is currently evaluating several different Scrum tools. However, we are looking at selecting a single, monolithic tool, e.g. Agilo. All of the "one-stop" solutions have their strengths and weaknesses, with the serious enterprise-type solutions being the best sort of fit, but all have some shortcomings. After reading the paper "Peer Code Review: An Agile Process" over at SmartBear, I started wondering if we were trying to force adoption of a tool on a "best fit" basis. I think you can take a couple of reference artefacts of the Scrum development process, say user stories, epics and themes, plus the code base, which must use a well-known SCM, e.g. SVN, Hg, etc. Then, if we take those as the common reference points for the tools employed, we would be able to use a group of tools to handle the different aspects of the Scrum process, rather than try to force the fit of a single tool, which is a bit like forcing a square peg into a round hole. In this way, provided you've agreed your common reference points, you can use several tools, each performing its role better than could be done by a single component in a monolithic tool suite. Is this a more sensible approach? Are the two reference points I mentioned above suitable, or is there a better choice of points where the tools would meet? cheers,

    Read the article

  • imagick showing script URL instead of image

    - by Raz
    Hi, currently I'm trying to use Imagick to generate some images without saving them on the server and then output them directly to the browser; my method of choice was ImageMagick with the imagick extension for PHP. I read the documentation, and I'm sure the package is installed on my machine (Windows XP, with XAMPP). From phpinfo:

        imagick module: enabled
        imagick module version: 2.0.0-alpha
        imagick classes: Imagick, ImagickDraw, ImagickPixel, ImagickPixelIterator
        ImageMagick version: ImageMagick 6.3.3 04/21/07 Q16 http://www.imagemagick.org
        ImageMagick release date: 04/21/07
        ImageMagick number of supported formats: 164
        ImageMagick supported formats: A, ART, AVI, AVS, B, BIE, BMP, BMP2, BMP3, C, CACHE, CAPTION, CIN, CIP, CLIP, CLIPBOARD, CMYK, CMYKA, CUR, CUT, DCM, DCX, DFONT, DPS, DPX, EMF, EPDF, EPI, EPS, EPS2, EPS3, EPSF, EPSI, EPT, EPT2, EPT3, FAX, FITS, FRACTAL, FTS, G, G3, GIF, GIF87, GRADIENT, GRAY, HISTOGRAM, HTM, HTML, ICB, ICO, ICON, INFO, JBG, JBIG, JNG, JP2, JPC, JPEG, JPG, JPX, K, LABEL, M, M2V, MAP, MAT, MATTE, MIFF, MNG, MONO, MPC, MPEG, MPG, MSL, MSVG, MTV, MVG, NULL, O, OTB, OTF, PAL, PALM, PAM, PATTERN, PBM, PCD, PCDS, PCL, PCT, PCX, PDB, PDF, PFA, PFB, PGM, PGX, PICON, PICT, PIX, PJPEG, PLASMA, PNG, PNG24, PNG32, PNG8, PNM, PPM, PREVIEW, PS, PS2, PS3, PSD, PTIF, PWP, R, RAS, RGB, RGBA, RGBO, RLA, RLE, SCR, SCT, SFW, SGI, SHTML, STEGANO, SUN, SVG, SVGZ, TEXT, TGA, THUMBNAIL, TIFF, TILE, TIM, TTC, TTF, TXT, UIL, UYVY, VDA, VICAR, VID, VIFF, VST, WBMP, WMF, WMFWIN32, WMZ, WPG, X, XBM, XC, XCF, XPM, XV, XWD, Y, YCbCr, YCbCrA, YUV

    This is from phpinfo, so I know I have it installed. The thing is, when I try to generate an image and save it, it works flawlessly, but when I try to output the image directly, I get the script URL as an image. Here is the code used:

        // Build the text to draw on the canvas.
        $draw = new ImagickDraw();
        $draw->setFont('AnkeCalligraph.TTF');
        $draw->setFontSize(52);
        $draw->annotation(110, 110, "Hello World!");
        $draw->annotation(50, 220, "Hello World!");

        // Load the base image, draw the text onto it, and emit it as PNG.
        $canvas = new Imagick('./pictures/test_live.PNG');
        $canvas->drawImage($draw);
        $canvas->setImageFormat('png');

        header("Content-Type: image/png");
        echo $canvas;

    If I use writeImage(), the file on the server is created with no problems. Does anyone have any ideas what I'm doing wrong?

    Read the article
