Search Results

Search found 49518 results on 1981 pages for 'configuration files'.

Page 914/1981

  • Tcpreplaying using VMware

    - by Methos
    This is more like a testbed setup question. I want to use VMware to debug some networking code in the Linux kernel in the VM. My VM has two network interfaces. What I want to do is replay a capture file on the host and receive the packets in the VM. My problem is that I do not see the replayed packets in the VM. I am running VMware and tcpreplay on the host as sudo, so there should not be any problem accessing device files. I am running VMware Workstation 7.0.
    a. I first began with Custom networking, as that provides the option of creating your own virtual network name. I wrote /dev/vmnet3 and /dev/vmnet4 for the two interfaces respectively. However, after booting the guest, I did not see any of these interfaces or device files (in /dev) created on the host.
    b. Then I tried 'Host Only', but that does not show what bridge/device file is associated with the interface.
    c. Finally I tried bridged networking mode. I see vmnet1, vmnet8 and vboxnet0 on the host.
    I have tcpreplayed the capture file on each of these interfaces, for all three cases above, and tried to capture packets in the VM using "tcpdump -i any". However, I do not see any packets. Any ideas/pointers?
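    A minimal sketch of the replay/capture pair, assuming the host-side interface is vmnet1 and the guest-side interface is eth0 (both are placeholders; substitute whatever your setup actually exposes):

        # on the host: replay the capture file onto the virtual interface (run as root)
        sudo tcpreplay --intf1=vmnet1 capture.pcap

        # in the guest: capture on a specific interface rather than "any"
        sudo tcpdump -i eth0 -n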

    Read the article

  • Sarg Daily Report not generated

    - by Suleman
    Ubuntu 12.04 LTS, Squid 3, Sarg 2.3.2. The following is my crontab configuration:
    SHELL=/bin/sh
    PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
    # m h dom mon dow user command
    25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
    47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
    52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
    This does not generate the daily HTML report.
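    For comparison, a hedged sketch of the crontab line that would actually produce the daily report; the stock lines above only run whatever lives in /etc/cron.daily, so unless the sarg package installed a script there, nothing invokes sarg at all. Paths are assumptions based on a default Ubuntu/Squid 3 install:

        # /etc/crontab -- generate the sarg report once a day at 06:30
        30 6 * * * root /usr/bin/sarg -l /var/log/squid3/access.log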

    Read the article

  • How to check DVD integrity at max read speed of DVD writer

    - by ashishsony
    I need to check the integrity of burned DVDs so that I can be sure about my backed-up data. I use DL DVDs for the backup. Earlier I used the VSO Inspector software for this, but since the day I switched to DL DVDs, VSO Inspector gives me errors when checking. I think the errors are because the switch between layers involves some dummy data somewhere. Secondly, it's damned slow at checking. I believe a utility that can read all files (not the disk surface) and report if some files are unreadable would do the job, but it should be quick! Nobody wants to sit through 3-4 hours of disk checking after a quick 30-minute data burn! I am looking for such a utility on Windows or Linux; even scripts (Python, etc.) will do. I just want to be assured that the data is safe. Can someone help me with this? Thanks.
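    As a rough sketch of the "read every file" approach on Linux, assuming the burned disc is mounted at /media/dvd (a hypothetical mount point):

        # read every file on the disc at the drive's own speed; read failures land in the log
        find /media/dvd -type f -exec cat {} + > /dev/null 2> read-errors.log

        # or, for a stronger guarantee, checksum the burned copy against the source tree
        (cd /media/dvd && find . -type f -exec md5sum {} +) > /tmp/burned.md5
        (cd /path/to/source && md5sum -c /tmp/burned.md5)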

    Read the article

  • Windows Media Center doesn't see my movies

    - by DrJekyll
    I am trying to configure Windows Media Center (Windows 7 Ultimate). I selected the folder with my movies and added it to the library, but when I went to the movies library, it said "There are no items in this library yet - Windows Media Center is searching for media files in the background...". I have all the necessary codecs installed, and Windows Media Player opens those movies correctly. When I right-click on a file and choose Open with - Windows Media Center, it also plays them without any problem. Any ideas why they don't appear in the libraries?
    Edit: The movies are encoded with the DivX and Xvid codecs and have the ".avi" extension. Windows has no problems playing them. I told Media Center where the files are. I even pointed Windows Media Center to a folder with only one .avi file, and it still couldn't find anything there. (I gave it quite some time, even though searching a directory with only one file shouldn't take more than a few seconds.) When I add a folder with a lot of movies, I get a dialog box: "You can wait while media is added or select OK to continue using Windows Media Center." At the end it says it added about 90 movies, but when I go to the libraries, it's still empty.

    Read the article

  • Securing SSH/SFTP and best practices on security

    - by MultiformeIngegno
    I'm on a fresh VPS with Ubuntu Server 12.04. I wanted to ask about good practices to apply to enhance security over a stock Ubuntu Server. This is what I did up to now: I added Google Authenticator to SSH, then I created a new user (which I'll use instead of 'root' for SSH & SFTP access) and added it to my /etc/sudoers list below 'root', so now it's:
    # User privilege specification
    root     ALL=(ALL:ALL) ALL
    new_user ALL=(ALL:ALL) ALL
    Then I edited sshd_config, set PermitRootLogin to 'no', and restarted the ssh service. Is this OK? There are a few things I'd like to ask, though:
    1) What's the sense of adding a new (sudoer) user while the root user still exists (OK, it can't log in with root privilege, but it's still there)?
    2) System files are owned by 'root'. I want to use my new_user for SFTP access, but with it I can't edit those files! Should I mass-chmod them so that new_user has write permissions too? What's the good practice here?
    Thanks in advance; I hope you'll tell me if I did something wrong and/or other ways to secure the system. :)
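    For reference, a minimal sketch of the sshd_config lines this describes, plus two common extras; the AllowUsers and PasswordAuthentication lines are suggestions, not something the poster already did (and key-only login needs care when PAM-based OTP is in play):

        # /etc/ssh/sshd_config
        PermitRootLogin no          # what the poster set
        AllowUsers new_user         # assumption: whitelist the single login account
        PasswordAuthentication no   # assumption: allow keys (or PAM challenge-response) only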

    Read the article

  • What is the fastest way to copy all data to a new, larger hard drive?

    - by SUPER user
    I was certain this would have been covered before, but I cannot find an answer amongst all the almost-duplicates that come up; sorry if I've missed something obvious. I have a full 320 GB disk inside my machine, a new 1 TB disk to replace it, and a USB 2.0 chassis. It is only data on a single partition, no OS/apps involved, and the old drive will be kept somewhere as backup (no secure wiping etc.). The simple option would be to put the new disk in the USB chassis, copy the files, then swap the drives over. But for USB pen drives, reading is around 4x faster than writing. If the same is true for a USB SATA chassis (is it?) then it would be significantly faster to swap the drives first and read from the old drive over USB, right? The other consideration is that copying lots of files is usually slower than copying a single file of equivalent size. Is Windows 7 smart enough to do everything in a single lump like that, or is there specialised software that should be used instead? (Even if SATA-to-SATA copying is faster than involving USB, knowing what to do when that isn't an option is useful information.)
    Summary:
    - Does a USB SATA chassis suffer from a read/write inequality (like a USB pen drive does, but unlike a direct SATA connection)?
    - Can Windows 7 do sequential access? (I can't find confirmation that Robocopy does this.)
    - Or is it necessary to use a bootable CD/USB with something like Clonezilla to achieve sequential copy speeds?
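    On the Windows 7 half of the question: robocopy copies file by file (no guarantee of sequential disk access), but its unbuffered-I/O mode helps with big trees; a hedged sketch, assuming the old data is on D: and the new drive is E: (drop /J if your robocopy version rejects it):

        :: copy everything, preserving data/attributes/timestamps, with minimal retry delays
        robocopy D:\ E:\ /E /COPY:DAT /R:1 /W:1 /J /LOG:C:\copy.log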

    Read the article

  • Global menu stopped working for most applications

    - by ltorgo
    I have Ubuntu 12.04 32-bit and had been enjoying global menus and HUD since the beginning. However, for some strange reason, I stopped getting global menus for most applications, and thus no HUD fun anymore. Actually, the only two applications where I still have global menus are Thunderbird and Firefox, so I'm not sure if it has something to do with those two... I've already browsed through some similar reports and tried the following:
    - checking that indicator-appmenu is installed
    - using Unsettings to switch global menus off and on, with reboots in between
    - logging in as a different user, to see if it was some configuration of my user
    - checking, using dconf-editor, that com > canonical > indicator > appmenu > menu-mode = "global"
    Any hint/help would be appreciated! Thank you in advance.
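    Two quick hedged checks from a terminal, using the same key the poster inspected and 12.04-era package names (verify the names against your archive before running):

        # read the raw dconf key mentioned above
        dconf read /com/canonical/indicator/appmenu/menu-mode

        # reinstall the pieces the global menu depends on
        sudo apt-get install --reinstall indicator-appmenu appmenu-gtk appmenu-gtk3 appmenu-qt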

    Read the article

  • How can I restore "Open With" context menu item in Windows 7?

    - by Izzy Helianthus
    I tried various ways to fix this problem but ended up at a dead end. My problem is the missing "Open With" context menu items (or subitems?). They do not appear even if I hover over the menu for a minute or two. Below is a screenshot of the respective right-click menu. Note: the only problem with "Open With" is in the right-click menu (as well as the FILE menu).
    Edited: The "Open With" submenu is only accessible at the top of the Explorer window, while the usual right-click menu doesn't work. Repaste from comment: I don't think any Windows files are involved, because another user on the same computer isn't affected at all and can see the "Open With" context submenu. I believe this must involve the current user's registry. It happens with all files (any file type, except folders). I can only use Open With by clicking the file and selecting it manually at the top of the Explorer window. (Refer to the link for the screenshot.)
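    Since only one profile is affected, a hedged starting point is the per-user file-association overrides; this command only inspects, it changes nothing:

        :: list the per-user association keys for the broken profile (run as that user)
        reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts" /s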

    Read the article

  • How to change my W2k8 System Partition?

    - by Chris May
    On my Windows 2008 server, my C: is 1.5 TB and the partition is marked as Healthy (Boot, Page File, Active, Crash Dump, Primary Partition), yet somehow I ended up with a 2 GB D: that is marked as Healthy (System). On this D: drive are only a few MB worth of files (bootmgr, the boot folder, bootsect.bak), but all the Windows files are on C:. I've done everything I can to remove the (System) mark: I tried using bcdedit, I tried marking the C: partition as "Active", and I tried using bootsect.exe to assign the C: drive as the boot partition. Maybe I didn't do one of those steps correctly, but I've tried everything I can think of. When I got my new Dell PowerEdge T710, I didn't bother removing its two small drives before I put in my two new large drives, so I think that when I installed W2k8 Server, Dell may have left some bootable partition on their drives to help me install the OS; I never used it and just booted right from the install CD. Can anyone help me remove the (System) mark from D: so I can remove the D: partition and still boot from C:? I know I could remove the D: drives and reinstall Windows, but I'm trying to avoid a total reinstall.
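    One hedged route is rebuilding the boot files directly onto C: with bcdboot (shipped with Server 2008 R2; on the original Server 2008 it comes with the Windows AIK), then marking C: active. Shown as a sketch, not a tested recipe for this exact box:

        :: recreate the BCD store and boot files on C:, sourcing from the running Windows
        bcdboot C:\Windows /s C: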

    Read the article

  • Why does Google report a soft 404 when I redirect to the signup page?

    - by Hettomei
    In the last month, I've had an increased number of "soft 404" errors reported by Google Webmaster Tools for pages that actually work well for users.
    Configuration (maybe useless): the website is built with Rails 3.1, and authentication is handled by the gem Devise.
    Problem: on the page http://en.bemyboat.com/yacht-charter/9965-sailboat-beneteau-oceanis-43, click "Ask a Boat request" (a simple form, submitted via GET to http://en.bemyboat.com/boat_requests/new/9965). You are redirected with HTTP status 302 to sign in, then sent back to the "new" page after successfully signing in. Google tells me that the "Ask a Boat request" link returns a soft 404. I can't make this form use POST (which would solve the problem) because we need to automatically redirect users back to the page after sign-in (the gem Devise memorizes the GET link).
    To simplify, the question is: how do I protect a private page behind authentication, reached via a simple GET, without being penalized by Google with a "soft 404"?
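    To see exactly what the crawler sees for that link, a hedged check from any shell:

        # -I requests headers only; this shows the 302 Google follows to the sign-in page
        curl -I http://en.bemyboat.com/boat_requests/new/9965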

    Read the article

  • Creuna Platform

    - by csharp-source.net
    Creuna Platform is an open source web application framework based on Microsoft .NET, fully written in C#. The aim of Creuna Platform is to make life easier for system developers by providing a highly competent component toolkit that increases the productivity and quality of a system. The framework contains components for data access, configuration handling, and messaging, plus a broad range of utility classes, controls and services. It also has several components for the EPiServer CMS. Creuna Platform is licensed under the Affero GNU General Public License Version 3.

    Read the article

  • Remote server hangs, gets stuck. How to debug?

    - by bibstha
    I have a VPS running on VMware ESX with Ubuntu 8.04 LTS. It has been running smoothly for the past 3 months; however, recently we've noticed two strange bugs.
    a. The server hangs; today was the second time. The nature of the hang is very strange: I can ping the server and it sends back responses fine, but all other services (sshd, apache, mysql, etc.) do not respond at all. When it's working:
    telnet servername 22
    Escape character is '^]'.
    SSH-2.0-OpenSSH_5.X Debian-5ubuntu1
    and other web services run fine. When it's hung, I can make TCP connections to 22 as well as 80 but receive no response at all:
    telnet servername 22
    Escape character is '^]'.
    How can I debug this problem? Are there any daemons I can run that will periodically log status? Please tell me how to proceed with this.
    b. The other strange problem is that lately I am unable to transfer files larger than around 100 KB; smaller files of around 1-2 KB work fine. scp anotherserver:filename . or wget http://www.example.com/file gets stuck. There is still around 6 GB of space remaining, so I don't think that is an issue. Any pointers where I should look?
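    On the "daemon that periodically logs status" question, a minimal cron-driven sketch (script and log paths are placeholders; run it from a crontab line like * * * * * root /usr/local/bin/status-snapshot.sh):

        #!/bin/sh
        # status-snapshot.sh -- append a timestamped health record once a minute
        {
          date
          uptime
          free -m | head -2
          ss -s 2>/dev/null || netstat -an | head -20
        } >> /var/log/status-snapshot.log 2>&1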

    Read the article

  • eAccelerator Issue - Cache Directory Empty.

    - by Tom
    Hi all, hoping someone can give me a hand with this. I've recently installed eAccelerator 0.9.6.1 on a CentOS LAMP server. I had it working fine, using /tmp/accelerator as the cache directory. php.ini setup:
    zend_extension="/usr/local/lib/php/extensions/no-debug-non-zts-20060613/eaccelerator.so"
    eaccelerator.shm_size="200"
    eaccelerator.cache_dir="/var/cache/eaccelerator"
    eaccelerator.enable="1"
    eaccelerator.optimizer="1"
    eaccelerator.check_mtime="1"
    eaccelerator.debug="0"
    eaccelerator.filter=""
    eaccelerator.shm_max="0"
    eaccelerator.shm_ttl="3600"
    eaccelerator.shm_prune_period="180"
    eaccelerator.shm_only="1"
    eaccelerator.compress="1"
    eaccelerator.compress_level="9"
    php -v output:
    PHP 5.2.12 (cli) (built: Feb 3 2010 00:34:28)
    Copyright (c) 1997-2009 The PHP Group
    Zend Engine v2.2.0, Copyright (c) 1998-2009 Zend Technologies
        with eAccelerator v0.9.6.1, Copyright (c) 2004-2010 eAccelerator, by eAccelerator
        with the ionCube PHP Loader v3.3.20, Copyright (c) 2002-2010, by ionCube Ltd.
    I had to remove the cache directory as I was testing something. I remade it and re-set the permissions, and found that eAccelerator was no longer creating cache files within the folder. I thought it might be down to ownership rights on the folder, so I chown'd it apache.apache, which made no difference. I recreated the directory in /var/cache instead, edited php.ini to point to the new cache dir location, chmod'd, chown'd etc., and still eAccelerator is not creating any cache files in the directory (it stays empty). Could someone suggest what I might be doing incorrectly here? I've read through numerous pages trying to troubleshoot the issue, to no avail. Any help appreciated.
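    Two things worth noting here, hedged. First, eaccelerator.shm_only="1" tells eAccelerator to cache in shared memory only and never write to disk, which on its own would leave the directory empty. Second, for a disk cache the directory must exist and be writable by the Apache user; a typical recreation looks like:

        mkdir -p /var/cache/eaccelerator
        chown apache:apache /var/cache/eaccelerator   # CentOS's Apache user
        chmod 0755 /var/cache/eaccelerator            # some guides use 0777; start tight
        service httpd restart                         # make PHP re-read php.ini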

    Read the article

  • Interactive console based CSV editor

    - by Penguin Nurse
    Although spreadsheet applications for editing CSV files on the console used to be among the earliest killer applications for personal computers, only a few of them, and even less documentation about them, are still actively maintained. After extensive searching of the web, manpages and source code, I ended up with the following three applications, all of which have fundamental drawbacks:
    - sc (abbrev. for spreadsheet calculator): a nice tool with vi keybindings, but it does not put strings containing the delimiter into quotes when exporting to a delimiter-separated format, and it can't import CSV files correctly, i.e. all numbers are interpreted as strings.
    - GNU oleo: doesn't seem to have been actively maintained since 2001, and there are therefore no packages for major Linux distributions.
    - teapot: offers packages for various operating systems, but uses, for example, counter-intuitive naming for cells (numbers for row and column, i.e. 11 seems to be intended to mean row 1, column 1) and requires superfluous code for the FLTK GUI.
    Various Emacs modes also do not quote strings containing the delimiter well, or require much more typing to enter the scaffold of a table. I would therefore be very grateful for a way of overcoming one of these drawbacks, or any hints towards another console-based CSV editor. It needn't actually do any calculations, just edit cells column- and row-wise.

    Read the article

  • SQL Server 2008 Bring Database Online trying to open a file from a drive that doesn't exist

    - by Nai
    This is the error I am facing:
    TITLE: Microsoft.SqlServer.Smo
    Set offline failed for Database 'Go3D_Retailer'
    ADDITIONAL INFORMATION:
    An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
    Unable to open the physical file "E:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\ftrow_Go3D_catalog.ndf". Operating system error 2: "2(failed to retrieve text for this error. Reason: 15105)".
    Database 'Go3D_Retailer' cannot be opened due to inaccessible files or insufficient memory or disk space. See the SQL Server errorlog for details.
    ALTER DATABASE statement failed. (Microsoft SQL Server, Error: 5120)
    Background to this error: I've been trying to move my log-shipping destination database to another physical server for analysis purposes. Because I do not have domain keys and Active Directory set up, I had to hack my process by using the same username/password for both the source and destination servers to get the process to work. Following that, I used this guy's solution to move the destination database to another server. However, this error occurs when I try to bring the database back online. I don't have an E: drive on my server and I have no idea why it's trying to open a file from the E: drive. I have over 100 GB left on my hard disk, so it's definitely not a space issue. This sounds like a bug... any ideas?
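    The ftrow_ prefix suggests a full-text catalog file whose recorded path still points at the old server's E: drive. A hedged first check is to ask SQL Server where it thinks the database's files live (instance name and authentication are placeholders):

        sqlcmd -S localhost -E -Q "SELECT name, physical_name, state_desc FROM sys.master_files WHERE database_id = DB_ID('Go3D_Retailer');"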

    Read the article

  • The MySQL service is stuck in the "starting" state on Windows

    - by andres descalzo
    I have been running "MySQL-5-1-47" on Windows 2003 for several months. When I restarted, the "MySQL" service stayed in the "starting" state. The only way to bring the service up was to run the program directly: C:\Program Files\MySQL\MySQL Server 5.1\bin\mysqld
    This is the MySQL error log:
    100906 16:07:29 [Note] Event Scheduler: Purging the queue. 0 events
    100906 16:07:32 InnoDB: Starting shutdown...
    100906 16:07:37 [Note] Plugin 'FEDERATED' is disabled.
    100906 16:07:38 InnoDB: Shutdown completed; log sequence number 0 44233
    100906 16:07:38 [Note] mysqld: Shutdown complete
    100906 16:07:39 InnoDB: Started; log sequence number 0 44233
    100906 16:17:21 [Note] Plugin 'FEDERATED' is disabled.
    100906 16:17:22 InnoDB: Started; log sequence number 0 44233
    100906 16:22:01 [Note] Plugin 'FEDERATED' is disabled.
    100906 16:22:02 InnoDB: Started; log sequence number 0 44233
    100906 16:22:02 [Note] Event Scheduler: Loaded 0 events
    100906 16:22:02 [Note] C:\Program Files\MySQL\MySQL Server 5.1\bin\mysqld.exe: ready for connections.
    Version: '5.1.47-community' socket: '' port: 3306 MySQL Community Server (GPL)
    The last lines are from after loading the program from the shell. Thanks.
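    One hedged diagnostic when a Windows service wedges in "starting" is to run the same binary in the foreground with console logging, so startup errors appear on screen instead of only in the error log:

        "C:\Program Files\MySQL\MySQL Server 5.1\bin\mysqld" --console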

    Read the article

  • Used DBAN to wipe a Lenovo U410, then installed Ubuntu. Need to install Windows 8, but I get an error.

    - by David
    When I try to boot from my Windows 8 USB (tested on a separate PC), I get this error text: "The boot configuration data for your PC is missing or contains errors." I want to install Windows again so I can fix a battery issue I'm having: Lenovo's power management app has an option to keep the battery at 60% if the laptop stays plugged in most of the time. I enabled this, then formatted without changing it back. I think there might be a RAID setup that I may need to remove, or something like that. I don't mind removing Ubuntu and starting fresh. I would just love some help with all this; I'm a noob when it comes to Linux.

    Read the article

  • laptop-mode-tools and hard drive spin-down

    - by sagarchalise
    I wanted to extend my laptop's battery life. After googling a lot, I found many tips and tricks, some even on this site, and then I found the laptop-mode-tools package in Synaptic. Now, I am not well aware of what hard drive spin-down is, so I have a dilemma about installing this package, as it also seems to remove acpi-support. So my question is: how reliable is this package for extending battery life, and what configuration should I use with it? I also stumbled upon some posts saying spin-downs may kill the hard drive. Can anyone clarify, with some configuration tips, especially for laptop-mode-tools? Thanks in advance.
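    For what "spin-down" means in practice: laptop-mode-tools largely drives the same knobs that hdparm exposes, so a hedged way to inspect and adjust them by hand (assuming the disk is /dev/sda):

        # query the current APM level (lower = more aggressive power saving, 255 = off)
        sudo hdparm -B /dev/sda

        # set a moderate standby timeout: -S 120 means 120 x 5 = 600 seconds of idle
        sudo hdparm -S 120 /dev/sda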

    Read the article

  • Can I tell if CrashPlan has backed up a particular file in a particular state?

    - by Chris Cogdon
    I would like to be able to tell, programmatically, whether CrashPlan has backed up a particular file, including the current updates to that file - i.e., that the current contents of the file are backed up. It's relatively easy to tell when CrashPlan last backed up a file: its name appears in /usr/local/crashplan/log/backup_files.log.0, and with some accuracy I could compare the backup time with the file's last modification time, but that method seems somewhat dubious. A couple of methods I could think of, but don't know how to carry out:
    1. Compare the current file to CrashPlan's metadata about that file. This requires knowledge of the format of CrashPlan's "cache" files as well as the hashing system used. It might be achievable through the CLI, but the CLI is just a portal into the GUI, and I need something that's scriptable.
    2. Restore the file to a temporary directory and compare it. Unfortunately, there is no CLI to do restores; the GUI is the only way.
    I'll describe what I'm trying to achieve; it would be nice to know how to do the above, even if there are alternative methods for the following. I'm using CrashPlan for continuous backups of my PostgreSQL database, using WAL archives. In the current configuration, the archive command copies the files to an archive directory, which is backed up by CrashPlan. Every so often I manually confirm (or just trust) that a group of WALs is backed up, remove them from the archive directory, and occasionally do a restore through the GUI to ensure I can retrieve current and "deleted" WALs. The xlog directory is backed up too, so I have a good chance of doing a near-full restore even if a particular xlog hasn't been archived by PostgreSQL yet. I'd like to automate this process, which necessitates either confirming the backup status and recency, or automating a restore for comparison purposes. (As a bonus, if the method is trustworthy, I could turn the archive_command from "copy to archive directory" into "confirm CrashPlan has backed up the current version" and do away with the archive directory completely.) (And yes, I'm doing regular pg_dumpall runs in addition to the above.)
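    The mtime-comparison idea from the first paragraph, sketched as a shell check (the log path comes from the question; the file path is a placeholder, and comparing the two timestamps is left to the reader since the log's exact layout isn't documented here):

        f="/path/to/file"
        # last time CrashPlan logged a backup of this file
        grep -F "$f" /usr/local/crashplan/log/backup_files.log.0 | tail -1
        # last time the file was modified
        stat -c '%y' "$f"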

    Read the article

  • HAProxy is caching the forwarding?

    - by shadow_of__soul
    I'm trying to set up a server structure for an application I'm building in Node.js with socket.io. My setup is: an HAProxy frontend forwards to Apache 2 as the default backend (or nginx; it's Apache in this local test), and to the Node.js app if the URL contains socket.io in the request AND comes from a certain domain name. I have something like:
    global
        log 127.0.0.1 local0
        log 127.0.0.1 local1 notice
        maxconn 4096
        user haproxy
        group haproxy
        daemon
    defaults
        log global
        mode http
        maxconn 2000
        contimeout 5000
        clitimeout 50000
        srvtimeout 50000
    frontend all 0.0.0.0:80
        timeout client 5000
        default_backend www_backend
        acl is_soio url_dom(host) -i socket.io  # if the request contains socket.io
        acl is_chat hdr_dom(host) -i chaturl    # if the request comes from chaturl.com
        use_backend chat_backend if is_chat is_soio
    backend www_backend
        balance roundrobin
        option forwardfor  # this sets X-Forwarded-For
        timeout server 5000
        timeout connect 4000
        server server1 localhost:6060 weight 1 maxconn 1024 check  # forwards to apache2
    backend chat_backend
        balance roundrobin
        option forwardfor  # this sets X-Forwarded-For
        timeout queue 50000
        timeout server 50000
        timeout connect 50000
        server server1 localhost:5558 weight 1 maxconn 1024 check  # forwards to node.js app
    The problem comes when I make a request to something like www.chaturl.com/index.html: it loads perfectly, but the socket.io files (www.chaturl.com/socket.io/socket.io.js) fail to load; that request goes to Apache when it should go to the Node.js app that serves those files. The weird thing is that if I access the socket.io file directly, it loads after refreshing a few times, so I suppose HAProxy is "caching" the forwarding for the client once its first request reaches the Apache server. Any suggestion of how this can be solved, or what I can try or look at?
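    One hedged way to see which backend actually answers is to request the file repeatedly and watch the response headers; Apache identifies itself in the Server: header, while a stock Node.js app typically sends none:

        for i in 1 2 3 4 5; do
          curl -s -o /dev/null -D - http://www.chaturl.com/socket.io/socket.io.js | grep -i '^HTTP\|^Server'
        done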

    Read the article

  • An e-mail that deletes itself after being read to guarantee confidentiality, or even without being read at all: an AT&T patent

    An e-mail that deletes itself after being read to guarantee confidentiality, or even without being read, in a patent filed by AT&T. If the Snowden/PRISM affair didn't give you your fill of spy novels, this AT&T patent will certainly do the trick. The American telecoms giant has just dreamed up a "Mission Impossible"-style e-mail that deletes itself. Two options are possible: either the e-mail disappears once it has been opened, read and closed, or it is erased at a given date and time, whether or not the recipient has read it (purists will point out that this is therefore not exactly the setup imposed on Jim Phelps).

    Read the article

  • SOA Suite 11g (11.1.1.3): Creating a single server domain with SOA Suite and Oracle Service Bus functionality

    - by Chris Tomkins
    One of the major benefits of the latest version of the Oracle SOA products, SOA Suite 11gR1 PS2 (11.1.1.3) and Oracle Service Bus 11gR1 (11.1.1.3), is that both products depend on the same version of WebLogic Server 11g (10.3.3) and so can be installed in the same domain. If you are a developer building artifacts for both, but on a laptop/desktop with limited resources, this is particularly useful, as you can install a single server domain (by default you get a domain with an admin server and multiple managed servers) which incorporates both sets of functionality. The viewlet below shows you how to go about creating such a single server domain using the configuration wizard. Just click the Play button on the viewlet menu to start it. Note: as you have added all this functionality to a single server, it will take a few minutes to start up. As this is my first foray into the world of viewlets, I'd be interested in your opinion as to whether you have found this useful, and any tips/techniques for improving any future viewlets I may create. Technorati Tags: soa suite, osb, viewlet, weblogic, domain

    Read the article

  • HTTP Error: 413 Request Entity Too Large

    - by Torben Gundtofte-Bruun
    What I have: an iPhone app that sends HTTP POST requests (XML format) to a web service written in PHP. This is on a hosted virtual private server, so I can edit httpd.conf and other files on the server and restart Apache.
    The problem: the web service works perfectly as long as the request is not too large, but around 1 MB is the limit. After that, the server responds with:
    <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
    <html><head>
    <title>413 Request Entity Too Large</title>
    </head><body>
    <h1>Request Entity Too Large</h1>
    The requested resource<br />/<br />
    does not allow request data with POST requests, or the amount of data provided in the request exceeds the capacity limit.
    </body></html>
    The web service writes its own log file, and I can see that small messages are processed fine. Larger messages are not logged at all, so I guess something in Apache rejects them before they even reach the web service? Things I've tried without success (I've restarted Apache after every change; these steps are incremental):
    1. hosting provider's web-based configuration panel: disable mod_security
    2. httpd.conf: LimitXMLRequestBody 0 and LimitRequestBody 0
    3. httpd.conf: LimitXMLRequestBody 100000000 and LimitRequestBody 100000000
    4. httpd.conf: SecRequestBodyLimit 100000000
    At this stage, Apache's error.log contains the message:
    ModSecurity: Request body no files data length is larger than the configured limit (1048576)
    It looks like my step #4 didn't really take effect, which is consistent with step #1 but does not explain why mod_security appears to be active after all. What more can I try to get the web service to receive large messages?
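    The error-log line names the limit that is actually firing: ModSecurity's no-files body limit, which is a separate directive from SecRequestBodyLimit and defaults to exactly the 1048576 bytes quoted. A hedged fix, assuming ModSecurity 2.x configured through Apache:

        # in the ModSecurity configuration (e.g. modsecurity.conf or the vhost)
        SecRequestBodyLimit        104857600   # total request body, ~100 MB
        SecRequestBodyNoFilesLimit 104857600   # non-file body data -- the limit from the log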

    Read the article

  • View/ViewModel Interaction - Bindings, Commands and Triggers

    It looks like I have a set of posts on ViewModel, aka MVVM, that have organically emerged into a series or story of sorts. Recently, I blogged about The Case for ViewModel, and another on View/ViewModel Association using Convention and Configuration, and a long while back now, I posted an Introduction to the ViewModel Pattern as I was myself picking up this pattern, which has since become the natural way for me to program client applications. This installment adds to this on-going series. I've...

    Read the article

