Search Results

Search found 8238 results on 330 pages for 'dynamic disks'.

  • Nvidia RAID 1 Problem. Degraded drives...

    - by Vedat Kursun
    I had a RAID 1 on my system which has a Gigabyte GA 8N SLI motherboard with an Nvidia chipset (Nvidia RAID IDE ROM BIOS 4.84). When the system was working properly there used to be an icon in the system tray which showed my two RAID disks. But after my friend accidentally clicked on the "Remove drive safely" icon while trying to disconnect her USB drive, I noticed that the RAID system wasn't working. After a reboot there was suddenly a failure message during the boot screen. When I enter the Nvidia RAID setup utility (F10) I can see that both drives are degraded, and that won't change even if I get into them and press R for Rebuild. The only other options are Delete and Exit. When I boot into Windows (XP Pro 32 Bit) I can see both my disks with the same data on each of them, but my RAID 1 is broken. It's a relief to see that at least my RAID 1 was active, but it's annoying not being able to rebuild it. Is there a way I can rebuild my RAID 1 without having to delete the array and build it again? Because I don't want to back up 400 Gigs of data and then recopy it to my drives... (Disks: 2 x Seagate ST3500418 AS SATA drives)

    Read the article

  • How to fix missing RAID1 drive

    - by Sodved
    I had to do some fiddling about with my cables inside the box and now I am getting a "Critical Error" about the RAID disks during startup. I have a Gigabyte GA-MA770T-UD3 motherboard. Apparently the RAID controller is an AMD SB710 chip. I'm pretty sure I know what happened. The first time I rebooted I had forgotten the power cable on one of the disks in the RAID1 (mirror) and let it boot up. So I shut down and put the power back in. Now when it boots up and I go into the RAID admin interface (between the BIOS screen and the OS loading): it shows the RAID1 as in error; the logical device has one disk, and it says the other is disconnected or missing; the other physical disk shows up as a single disk. If I boot to the OS (Windows 7 32 bit) the data all seems to be there. If I go into Computer Management it says my partition is on a disk and working OK, but the other disk is offline because: "The disk is offline because it has a signature collision with another disk that is online". So I am guessing that because I STUPIDLY booted up with only one of the disks powered on, the other disk fell out of sync with the mirror and so now cannot rejoin it. How do I fix this? I want to get the RAID1 mirror working again. There does not appear to be any "Repair" option in the basic RAID admin tool which I get into during startup before the OS boots. I have not made any explicit changes to the online disk (but I guess the OS has probably written some admin data).

    Read the article

  • Could I have destroyed Partitioning-Scheme/Filesystem of HDDs with External Harddrive Case with builtin Raid-Controller?

    - by th3m3s
    I had just recently bought a Fantec QB-35US3R to have a nice box on my desk to make some backups to. Along with the HDD-Bay I had ordered some 4TB HDDs to run in RAID 5, which is handled by the hardware RAID controller of the Fantec HDD-Bay. The QB-35US3R arrived a few days before the hard drives, so I got impatient and had the idea to put three old 1TB disks in the Fantec device, just to test it... Long story short: I had made a backup of the most important data on these three disks before they broke. I had set the configuration scheme to RAID 3 at the Fantec device. It seems that the Fantec RAID controller has "somehow" destroyed the partitioning scheme or the file system, because when put into an HDD docking station, the disks get recognized by the OS (Ubuntu/Linux) but are not mountable anymore. I tried to recover the data from one HDD via gParted (parted), which ran some hours without success. I stopped there, before trying other tools, because I read that the longer a hard drive keeps running after the partitioning got destroyed, the worse it gets. What could the HDD-Bay have done to my lovely hard drives? Is there some routine a RAID controller executes when it wants to create a RAID system, like erasing the partition table (seems not plausible to me) or writing some information to every hard drive in the RAID (seems more likely to me)? Is there a chance to recover the data from these HDDs, or is the change a RAID controller makes so significant that no software is of help?

    Read the article

  • .htaccess allow from hostname?

    - by Mikey B
    Ubuntu 9.10, Apache2. Hi guys, long story short: I need to restrict access to a certain part of my web site based on a dynamic IP source address that changes every now and then. Historically, I've just added the following to .htaccess:

        order deny,allow
        deny from all
        # allow my dynamic IP address
        allow from <dynamic ip>

    But the problem is that I'll have to manually make this change every time the IP changes. Ideally I'd like to specify a hostname instead... something like:

        order deny,allow
        deny from all
        # allow my host
        allow from hostname.whatever.local

    That doesn't seem to have worked, though: I get a 403 Forbidden error. Does .htaccess not support hostnames?
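
    For reference, a hedged sketch of the hostname-based form (the name below is the poster's placeholder): Apache's mod_authz_host does accept hostnames, but it only grants access after a double reverse DNS lookup — the client IP must reverse-resolve to the name, and that name must resolve back to the same IP. A .local name that only a local resolver knows, or a dynamic IP without matching reverse DNS, fails that check and produces exactly this kind of 403.

        # works only with matching forward AND reverse DNS for the client
        Order deny,allow
        Deny from all
        Allow from hostname.whatever.local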

    Read the article

  • Windows 2003 DNS updates from ISC DHCP server

    - by wolfgangsz
    We have a very mixed network, with most clients being Debian Lenny and the rest Windows XP/Vista/7. The network itself is split into two segments (for technical reasons) called "corporate" and "engineering". On the "corporate" side all clients get their IP addresses from a Windows DHCP server, and the dynamic updates into the Windows DNS work just fine. On the "engineering" side, clients get their IP addresses from a Linux machine running the standard ISC DHCP server. Although this server is configured to do dynamic DNS updates, they actually don't work. Anybody got any advice on how to fix this? Please note: dynamic updates from the clients directly into the DNS would work, but are not an option for us. So this is strictly about how to make this work from an ISC DHCP server to a Windows DNS server.
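
    A minimal dhcpd.conf sketch of the shape this setup usually takes (zone names and addresses are placeholders, not taken from the question). One common culprit worth checking: ISC dhcpd does not speak GSS-TSIG, so an Active Directory-integrated zone set to accept only secure dynamic updates will silently reject its updates — the zone typically has to allow both nonsecure and secure updates for this to work.

        # dhcpd.conf fragment (sketch)
        ddns-update-style interim;
        ddns-updates on;
        ddns-domainname "engineering.example.com";
        ignore client-updates;          # the server performs the updates itself

        zone engineering.example.com. {
            primary 192.168.10.5;       # the Windows DNS server
        }
        zone 10.168.192.in-addr.arpa. {
            primary 192.168.10.5;
        }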

    Read the article

  • Relay thru external SMTP server on Exchange 2010

    - by MadBoy
    My client has a dynamic IP on which he hosts Exchange 2010 with the POP3 Connector running, gathering emails from his current hosting. Until he gets a static IP he wants to send emails out directly. This will work most of the time, but some servers won't accept email sent by Exchange from a dynamic IP (for multiple reasons), so I would like to relay through the external SMTP server that hosts the current mailboxes. Normally an SMTP server could be set up to allow relay through it, but that would require a static IP to be allowed on that server, so it would know which IP is allowed to relay through it. Or is there a way to set up a relay in Exchange 2010 so it can use a dynamic IP and authenticate itself with a user/password on the hosted server?
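
    For what it's worth, a rough Exchange Management Shell sketch of an authenticated smarthost Send Connector, which relays regardless of the sender's IP (the smarthost name is a placeholder, and the exact auth mechanism depends on what the hosting server offers):

        # sketch: route all outbound mail through the hosting provider's SMTP server
        New-SendConnector -Name "Hosted smarthost relay" -Usage Internet `
            -AddressSpaces "SMTP:*;1" `
            -SmartHosts "smtp.hostingprovider.example" `
            -SmartHostAuthMechanism BasicAuthRequireTLS `
            -AuthenticationCredential (Get-Credential)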

    Read the article

  • Detect when application is running in a VM environment

    - by Malcolm
    Looking for ideas on how to detect when our Windows application is running in one of the following VM environments. Some starter ideas for detection are in parentheses. There may be (much) better detection techniques - the starter ideas I've come up with are based on my Google research.

        VMware (looking for the presence of the optional VMware Tools is one way)
        Microsoft Virtual PC (these have a device named "Virtual HD" for their IDE disks, "MS Virtual SCSI Disk Device" for their SCSI disks)
        Citrix Xen
        Sun VirtualBox

    Thank you, Malcolm
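
    One generic heuristic, sketched in C# under the assumption that WMI is acceptable: most hypervisors stamp recognizable strings into Win32_ComputerSystem (VMware reports a "VMware, Inc." manufacturer; VirtualBox reports "innotek GmbH" / a "VirtualBox" model; Virtual PC and Hyper-V report Microsoft with a "Virtual Machine" model; Xen reports "Xen"). The matched substrings are illustrative, not exhaustive.

        using System;
        using System.Management;   // add a reference to System.Management.dll

        class VmDetect
        {
            static void Main()
            {
                // Query the manufacturer/model strings the hypervisor exposes
                var searcher = new ManagementObjectSearcher(
                    "SELECT Manufacturer, Model FROM Win32_ComputerSystem");
                foreach (ManagementObject mo in searcher.Get())
                {
                    string maker = ((mo["Manufacturer"] as string) ?? "").ToLowerInvariant();
                    string model = ((mo["Model"] as string) ?? "").ToLowerInvariant();
                    bool vm = maker.Contains("vmware")                                   // VMware
                           || maker.Contains("xen")                                      // Citrix Xen
                           || maker.Contains("innotek") || model.Contains("virtualbox")  // Sun VirtualBox
                           || (maker.Contains("microsoft") && model.Contains("virtual")); // Virtual PC
                    Console.WriteLine(vm ? "Probably virtualized" : "Probably physical");
                }
            }
        }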

    Read the article

  • Assign two static IP addresses to one mac address

    - by Timo Ylikännö
    Can isc-dhcp-server give two static IP addresses to one MAC address? I have several home terminals in my network. Each terminal has two interfaces: one for public traffic and one for management traffic. Both interfaces have the same MAC address. The DHCP server can tell the interfaces apart via a DHCP option field and DHCP class declarations. Every terminal has to have a static IP address instead of a dynamic one. With dynamic addresses and dynamic pools this would be an easy task. Or is there any DHCP server that can do this?
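
    One hedged possibility, sketched for isc-dhcp-server: the fixed-address statement accepts a list of addresses, and the server hands out whichever one is valid for the network segment the request arrived on. This assumes the public and management interfaces live on different subnets; the MAC and addresses below are placeholders.

        host terminal1 {
            hardware ethernet 00:11:22:33:44:55;
            # dhcpd picks the address matching the subnet of the request
            fixed-address 10.0.0.50, 192.168.100.50;
        }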

    Read the article

  • g++ symbol versioning. Set it to GCC_3.0 using version 4 of g++

    - by Ismael
    Hi all, I need to implement a Java class which uses JNI to control a fiscal printer in Xubuntu 8.10 with sun-java6-jdk installed. The structure is the following: EpsonDriver.java loads libEpson.so; libEpson is linked dynamically with EpsonFiscalProtocol.so (provided by Epson, no source available) and pthread. I use javah to generate the header file, and the code compiles. Then I put the libEpson.so in $JAVA_HOME/jre/lib/i386, and EpsonDriver.java uses a static initializer System.loadLibrary("libEpson"). That part works; however, when I try to use any of the methods I get an UnsatisfiedLinkError exception. Some time ago, a coworker did a version that works, and using objdump -Dslx I got the following:

        Program Header:
        LOAD off 0x00000000 vaddr 0x00000000 paddr 0x00000000 align 2**12 filesz 0x0000ccc4 memsz 0x0000ccc4 flags r-x
        LOAD off 0x0000d000 vaddr 0x0000d000 paddr 0x0000d000 align 2**12 filesz 0x00000250 memsz 0x00044a5c flags rw-
        DYNAMIC off 0x0000d014 vaddr 0x0000d014 paddr 0x0000d014 align 2**2 filesz 0x000000f0 memsz 0x000000f0 flags rw-
        NOTE off 0x000000d4 vaddr 0x000000d4 paddr 0x000000d4 align 2**2 filesz 0x00000024 memsz 0x00000024 flags r--
        STACK off 0x00000000 vaddr 0x00000000 paddr 0x00000000 align 2**2 filesz 0x00000000 memsz 0x00000000 flags rw-

        Dynamic Section:
        NEEDED EpsonFiscalProtocol.so
        NEEDED libpthread.so.0
        NEEDED libstdc++.so.6
        NEEDED libm.so.6
        NEEDED libc.so.6
        SONAME libcom_tichile_jpos_EpsonSerialDriver.so
        INIT 0x00007254
        FINI 0x0000ba08
        GNU_HASH 0x000000f8
        STRTAB 0x00001f50
        SYMTAB 0x00000ae0
        STRSZ 0x00002384
        SYMENT 0x00000010
        PLTGOT 0x0000d108
        PLTRELSZ 0x00000008
        PLTREL 0x00000011
        JMPREL 0x0000724c
        REL 0x000045c4
        RELSZ 0x00002c88
        RELENT 0x00000008
        TEXTREL 0x00000000
        VERNEED 0x00004564
        VERNEEDNUM 0x00000002
        VERSYM 0x000042d4
        RELCOUNT 0x000000ac

        Version References:
        required from libstdc++.so.6:
            0x056bafd3 0x00 05 CXXABI_1.3
            0x08922974 0x00 04 GLIBCXX_3.4
        required from libc.so.6:
            0x0b792650 0x00 03 GCC_3.0
            0x0d696910 0x00 02 GLIBC_2.0

    In the recently compiled file I get:

        Program Header:
        LOAD off 0x00000000 vaddr 0x00000000 paddr 0x00000000 align 2**12 filesz 0x00005300 memsz 0x00005300 flags r-x
        LOAD off 0x00005300 vaddr 0x00006300 paddr 0x00006300 align 2**12 filesz 0x00000274 memsz 0x00010314 flags rw-
        DYNAMIC off 0x00005314 vaddr 0x00006314 paddr 0x00006314 align 2**2 filesz 0x000000e0 memsz 0x000000e0 flags rw-
        EH_FRAME off 0x00004a00 vaddr 0x00004a00 paddr 0x00004a00 align 2**2 filesz 0x00000154 memsz 0x00000154 flags r--

        Dynamic Section:
        NEEDED libstdc++.so.5
        NEEDED libm.so.6
        NEEDED libgcc_s.so.1
        NEEDED libc.so.6
        SONAME EpsonFiscalProtocol.so
        INIT 0x00001cb4
        FINI 0x00004994
        HASH 0x000000b4
        STRTAB 0x00000da4
        SYMTAB 0x000004f4
        STRSZ 0x00000acf
        SYMENT 0x00000010
        PLTGOT 0x0000640c
        PLTRELSZ 0x00000270
        PLTREL 0x00000011
        JMPREL 0x00001a44
        REL 0x000019dc
        RELSZ 0x00000068
        RELENT 0x00000008
        VERNEED 0x0000198c
        VERNEEDNUM 0x00000002
        VERSYM 0x00001874
        RELCOUNT 0x00000004

        Version References:
        required from libstdc++.so.5:
            0x056bafd2 0x00 04 CXXABI_1.2
        required from libc.so.6:
            0x09691f73 0x00 03 GLIBC_2.1.3
            0x0d696910 0x00 02 GLIBC_2.0

    So I suspect the main difference is the GCC_3.0 symbol. I compile libcom_tichile_EpsonSerialDriver.so with the following command (from memory, as I'm not at work right now):

        g++ -Wl,-soname=.... -shared -I/*jni libraries*/ -o libcom_tichile_jpos_EpsonSerialDriver -lEpsonFiscalProtocol -lpthread

    Is there any way to tell g++ to use that symbol version? Or any idea how to make it work?
    EDIT: I have another non-working version with the following dump:

        Program Header:
        LOAD off 0x00000000 vaddr 0x00000000 paddr 0x00000000 align 2**12 filesz 0x0000bf68 memsz 0x0000bf68 flags r-x
        LOAD off 0x0000cc0c vaddr 0x0000cc0c paddr 0x0000cc0c align 2**12 filesz 0x000005e8 memsz 0x00044df0 flags rw-
        DYNAMIC off 0x0000cc20 vaddr 0x0000cc20 paddr 0x0000cc20 align 2**2 filesz 0x000000f8 memsz 0x000000f8 flags rw-
        EH_FRAME off 0x0000b310 vaddr 0x0000b310 paddr 0x0000b310 align 2**2 filesz 0x000002bc memsz 0x000002bc flags r--
        STACK off 0x00000000 vaddr 0x00000000 paddr 0x00000000 align 2**2 filesz 0x00000000 memsz 0x00000000 flags rw-
        RELRO off 0x0000cc0c vaddr 0x0000cc0c paddr 0x0000cc0c align 2**0 filesz 0x000003f4 memsz 0x000003f4 flags r--

        Dynamic Section:
        NEEDED EpsonFiscalProtocol.so
        NEEDED libpthread.so.0
        NEEDED libstdc++.so.6
        NEEDED libm.so.6
        NEEDED libgcc_s.so.1
        NEEDED libc.so.6
        SONAME libcom_tichile_jpos_EpsonSerialDriver.so
        INIT 0x000055d8
        FINI 0x0000a968
        HASH 0x000000f4
        GNU_HASH 0x00000a30
        STRTAB 0x00002870
        SYMTAB 0x00001410
        STRSZ 0x00002339
        SYMENT 0x00000010
        PLTGOT 0x0000cff4
        PLTRELSZ 0x00000168
        PLTREL 0x00000011
        JMPREL 0x00005470
        REL 0x00004ea8
        RELSZ 0x000005c8
        RELENT 0x00000008
        VERNEED 0x00004e38
        VERNEEDNUM 0x00000002
        VERSYM 0x00004baa
        RELCOUNT 0x00000001

        Version References:
        required from libstdc++.so.6:
            0x056bafd3 0x00 05 CXXABI_1.3
            0x08922974 0x00 03 GLIBCXX_3.4
        required from libc.so.6:
            0x09691f73 0x00 06 GLIBC_2.1.3
            0x0d696914 0x00 04 GLIBC_2.4
            0x0d696910 0x00 02 GLIBC_2.0

    Now I think the main difference is in the GCC_3.0 symbol/ABI.

    EDIT: Luckily, a coworker found a way to talk to the printer using Java.

    Read the article

  • How can I limit the cache used by copying so there is still memory available for other cache?

    - by Peter
    Basic situation: I am copying some NTFS disks in openSUSE. Each one is 2TB. When I do this, the system runs slow. My guesses: I believe it is likely due to caching. Linux decides to discard useful cache (e.g. kde4 bloat, virtual machine disks, LibreOffice binaries, Thunderbird binaries, etc.) and instead fills all available memory (24 GB total) with stuff from the disks being copied, which will be read only once, then written and never used again. So then any time I use these apps (or kde4), the disk needs to be read again, and reading the bloat off the disk again makes things freeze/hiccup. Due to the cache being gone, and the fact that these bloated applications need lots of cache, this makes the system horribly slow. Since it is USB, the disk and disk controller are not the bottleneck, so using ionice does not make it faster. I believe it is the cache rather than just the motherboard going too slow, because if I stop everything copying, it still runs choppy for a while until it recaches everything. And if I restart the copying, it takes a minute before it is choppy again. But also, I can limit it to around 40 MB/s, and it runs faster again (not because it has the right things cached, but because the motherboard buses have lots of extra bandwidth for the system disks). I can fully accept a performance loss from my motherboard's IO capability being completely consumed (which is 100% used, meaning 0% wasted power, which makes me happy), but I can't accept that this caching mechanism performs so terribly in this specific use case.

        # free
                     total       used       free     shared    buffers     cached
        Mem:      24731556   24531876     199680          0    8834056   12998916
        -/+ buffers/cache:    2698904   22032652
        Swap:      4194300      24764    4169536

    I also tried the same thing on Ubuntu, which causes a total system hang instead. ;) And to clarify, I am not asking how to leave memory free for the "system", but for "cache". I know that cache memory is automatically given back to the system when needed, but my problem is that it is not reserved for caching of specific things. Question: Is there some way to tell these copy operations to limit memory usage so some important things remain cached, and therefore any slowdowns are a result of normal disk usage and not rereading the same commonly used files? For example, is there a setting of max memory per process/user/file system allowed to be used as cache/buffers?
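
    One workaround along these lines, assuming GNU coreutils and device-to-image copying: perform the copy with O_DIRECT so the data never enters the page cache in the first place, leaving the existing cache to the applications (block size and paths below are illustrative). The small nocache wrapper tool, where available, achieves a similar effect for arbitrary commands via posix_fadvise.

        # copy without polluting the page cache (sketch)
        dd if=/dev/sdb of=/mnt/backup/disk.img bs=1M iflag=direct oflag=direct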

    Read the article

  • How to increase signal/range of your Wi-Fi antenna-less repeater/booster over the network?

    - by kenorb
    I have a BT Home Hub in the upper flat (2-3 walls away) and I'm using a WPS Wireless-N WiFi Range Router Repeater Extender in my flat, where I'm using my laptop. These are antenna-less devices. Are there any life-hack tricks to increase the signal/range of my repeater without buying a new, more powerful repeater? I've already tried moving my repeater closer to the ceiling and putting aluminium foil underneath, but it didn't help. Are there any methods, specific plates or materials which can boost the signal?

        Specification:
        Model: WN518W2
        Frequency range: 2.4-2.4835GHz
        Wireless transmit power: 14~17 dBm (typical)
        Wireless signal rates with automatic fallback: 11n: up to 300Mbps (dynamic); 11g: up to 54Mbps (dynamic); 11b: up to 11Mbps (dynamic)
        Modulation technology: DBPSK, DQPSK, CCK, OFDM, 16-QAM, 64-QAM
        Receiver sensitivity: 300M: -68dBm@10% PER / 150M: -68dBm@10% PER / 108M: -68dBm@10% PER / 54M: -68dBm@10% PER / 11M: -85dBm@8% PER / 6M: -88dBm@10% PER / 1M: -90dBm@8% PER
        Product dimensions: 11 * 6 * 7cm

    Read the article

  • forward all ports via htaccess to new address

    - by user875933
    I have a chat server running on my local machine that listens on different ports. I want to use the sub-domain of one of my accounts to access it. I intend to manually change the redirect whenever my local machine gets a different IP address. So: chat.example.com:123 would redirect to dynamic.ip.address:123. I am trying to accomplish this with .htaccess and RewriteRule. I have tried:

        RewriteEngine on
        RewriteRule ^(.*) http://dynamic.ip.address/ [L, R=302]

    but this doesn't work. When I try chat.example.com:123 nothing happens. When I input chat.example.com into the web browser, I get dynamic.ip.address. Is .htaccess the right tool for this? I am using a simple web host that gives me ssh access, but not much more.
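
    Two hedged observations on the snippet above: mod_rewrite's flag list must not contain spaces ("[L, R=302]" is a syntax error that Apache will typically reject), and an HTTP redirect can only affect requests Apache itself receives — traffic to chat ports Apache does not listen on never reaches .htaccess at all, which is why a dynamic-DNS hostname is the usual tool for that part. A corrected rule that also preserves the request path might look like:

        RewriteEngine On
        # $1 carries the original path through to the target host
        RewriteRule ^(.*)$ http://dynamic.ip.address/$1 [L,R=302]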

    Read the article

  • Announcing ASP.NET MVC 3 (Release Candidate 2)

    - by ScottGu
    Earlier today the ASP.NET team shipped the final release candidate (RC2) for ASP.NET MVC 3.  You can download and install it here.

    Almost there…

    Today’s RC2 release is the near-final release of ASP.NET MVC 3, and is a true “release candidate” in that we are hoping to not make any more code changes with it.  We are publishing it today so that people can do final testing with it, let us know if they find any last minute “showstoppers”, and start updating their apps to use it.  We will officially ship the final ASP.NET MVC 3 “RTM” build in January.

    Works with both VS 2010 and VS 2010 SP1 Beta

    Today’s ASP.NET MVC 3 RC2 release works with both the shipping version of Visual Studio 2010 / Visual Web Developer 2010 Express, as well as the newly released VS 2010 SP1 Beta.  This means that you do not need to install VS 2010 SP1 (or the SP1 beta) in order to use ASP.NET MVC 3.  It works just fine with the shipping Visual Studio 2010.  I’ll do a blog post next week, though, about some of the nice additional feature goodies that come with VS 2010 SP1 (including IIS Express and SQL CE support within VS) which make the dev experience for both ASP.NET Web Forms and ASP.NET MVC even better.

    Bugs and Perf Fixes

    Today’s ASP.NET MVC 3 RC2 build contains many bug fixes and performance optimizations.  Our latest performance tests indicate that ASP.NET MVC 3 is now faster than ASP.NET MVC 2, and that existing ASP.NET MVC applications will experience a slight performance increase when updated to run using ASP.NET MVC 3.

    Final Tweaks and Fit-N-Finish

    In addition to bug fixes and performance optimizations, today’s RC2 build contains a number of last-minute feature tweaks and “fit-n-finish” changes for the new ASP.NET MVC 3 features.  The feedback and suggestions we’ve received during the public previews have been invaluable in guiding these final tweaks, and we really appreciate people’s support in sending this feedback our way.  Below is a short-list of some of the feature changes/tweaks made between last month’s ASP.NET MVC 3 RC release and today’s ASP.NET MVC 3 RC2 release:

    jQuery updates and addition of jQuery UI

    The default ASP.NET MVC 3 project templates have been updated to include jQuery 1.4.4 and jQuery Validation 1.7.  We are also excited to announce today that we are including jQuery UI within our default ASP.NET project templates going forward.  jQuery UI provides a powerful set of additional UI widgets and capabilities.  It will be added by default to your project’s \scripts folder when you create new ASP.NET MVC 3 projects.

    Improved View Scaffolding

    The T4 templates used for scaffolding views with the Add-View dialog now generate views that use Html.EditorFor instead of helpers such as Html.TextBoxFor. This change enables you to optionally annotate models with metadata (using data annotation attributes) to better customize the output of your UI at runtime. The Add View scaffolding also supports improved detection and usage of primary key information on models (including support for naming conventions like ID, ProductID, etc).  For example: the Add View dialog box uses this information to ensure that the primary key value is not scaffolded as an editable form field, and that links between views are auto-generated correctly with primary key information. The default Edit and Create templates also now include references to the jQuery scripts needed for client validation.  Scaffolded form views now support client-side validation by default (no extra steps required).
    Client-side validation with ASP.NET MVC 3 is also done using an unobtrusive JavaScript approach – making pages fast and clean.

    [ControllerSessionState] –> [SessionState]

    ASP.NET MVC 3 adds support for session-less controllers.  With the initial RC you used a [ControllerSessionState] attribute to specify this.  We shortened this in RC2 to just be [SessionState] (see the sketch at the end of this excerpt). Note that in addition to turning off session state, you can also set it to be read-only (which is useful for webfarm scenarios where you are reading but not updating session state on a particular request).

    [SkipRequestValidation] –> [AllowHtml]

    ASP.NET MVC includes built-in support to protect against HTML and Cross-Site Script Injection Attacks, and will throw an error by default if someone tries to post HTML content as input.  Developers need to explicitly indicate that this is allowed (and that they’ve hopefully built their app to securely support it) in order to enable it. With ASP.NET MVC 3, we are also now supporting a new attribute that you can apply to properties of models/viewmodels to indicate that HTML input is enabled, which enables much more granular protection in a DRY way.  In last month’s RC release this attribute was named [SkipRequestValidation].  With RC2 we renamed it to [AllowHtml] to make it more intuitive. Setting the [AllowHtml] attribute on a model/viewmodel will cause ASP.NET MVC 3 to turn off HTML injection protection when model binding just that property.

    Html.Raw() helper method

    The new Razor view engine introduced with ASP.NET MVC 3 automatically HTML encodes output by default.  This helps provide an additional level of protection against HTML and Script injection attacks. With RC2 we are adding a Html.Raw() helper method that you can use to explicitly indicate that you do not want to HTML encode your output, and instead want to render the content “as-is”.

    ViewModel/View –> ViewBag

    ASP.NET MVC has (since V1) supported a ViewData[] dictionary within Controllers and Views that enables developers to pass information from a Controller to a View in a late-bound way.  This approach can be used instead of, or in combination with, a strongly-typed model class.  A common use case is a strongly typed Product model passed to the view in addition to two late-bound variables via the ViewData[] dictionary. With ASP.NET MVC 3 we are introducing a new API that takes advantage of the dynamic type support within .NET 4 to set/retrieve these values.  It allows you to use standard “dot” notation to specify any number of additional variables to be passed, and does not require that you create a strongly-typed class to do so.  With earlier previews of ASP.NET MVC 3 we exposed this API using a dynamic property called “ViewModel” on the Controller base class, and with a dynamic property called “View” within view templates.  A lot of people found the fact that there were two different names confusing, and several also said that using the name ViewModel was confusing in this context – since often you create strongly-typed ViewModel classes in ASP.NET MVC, and they do not use this API.  With RC2 we are exposing a dynamic property that has the same name – ViewBag – within both Controllers and Views.  It is a dynamic collection that allows you to pass additional bits of data from your controller to your view template to help generate a response.
    As an example, we could use it to pass a time-stamp message as well as a list of all categories to our view template; the view template (which is strongly-typed to expect a Product class as its model) can then use the two extra bits of information passed in our ViewBag to generate the response.  In particular, the list of categories passed in the dynamic ViewBag collection can be used to generate a dropdownlist of friendly category names to help set the CategoryID property of our Product object, and the Controller/View combination will then generate the HTML response.

    Output Caching Improvements

    ASP.NET MVC 3’s output caching system no longer requires you to specify a VaryByParam property when declaring an [OutputCache] attribute on a Controller action method.  MVC3 now automatically varies the output cached entries when you have explicit parameters on your action method – allowing you to cleanly enable output caching on actions (a sketch appears at the end of this excerpt). In addition to supporting full page output caching, ASP.NET MVC 3 also supports partial-page caching – which allows you to cache a region of output and re-use it across multiple requests or controllers.  The [OutputCache] behavior for partial-page caching was updated with RC2 so that sub-content cached entries are varied based on input parameters as opposed to the URL structure of the top-level request – which makes caching scenarios both easier and more powerful than the behavior in the previous RC.

    @model declaration does not add whitespace

    In earlier previews, the strongly-typed @model declaration at the top of a Razor view added a blank line to the rendered HTML output. This has been fixed so that the declaration does not introduce whitespace.

    Changed "Html.ValidationMessage" Method to Display the First Useful Error Message

    The behavior of the Html.ValidationMessage() helper was updated to show the first useful error message instead of simply displaying the first error. During model binding, the ModelState dictionary can be populated from multiple sources with error messages about the property, including from the model itself (if it implements IValidatableObject), from validation attributes applied to the property, and from exceptions thrown while the property is being accessed. When the Html.ValidationMessage() method displays a validation message, it now skips model-state entries that include an exception, because these are generally not intended for the end user. Instead, the method looks for the first validation message that is not associated with an exception and displays that message. If no such message is found, it defaults to a generic error message that is associated with the first exception.

    RemoteAttribute “Fields” -> “AdditionalFields”

    ASP.NET MVC 3 includes built-in remote validation support with its validation infrastructure.  This means that the client-side validation script library used by ASP.NET MVC 3 can automatically call back to controllers you expose on the server to determine whether an input element is indeed valid as the user is editing the form (allowing you to provide real-time validation updates). You can accomplish this by decorating a model/viewmodel property with a [Remote] attribute that specifies the controller/action that should be invoked to remotely validate it.
    With the RC this attribute had a “Fields” property that could be used to specify additional input elements that should be sent from the client to the server to help with the validation logic.  To improve the clarity of what this property does we have renamed it to “AdditionalFields” with today’s RC2 release.

    ViewResult.Model and ViewResult.ViewBag Properties

    The ViewResult class now exposes both a “Model” and “ViewBag” property off of it.  This makes it easier to unit test Controllers that return views, and avoids you having to access the Model via the ViewResult.ViewData.Model property.

    Installation Notes

    You can download and install the ASP.NET MVC 3 RC2 build here.  It can be installed on top of the previous ASP.NET MVC 3 RC release (it should just replace the bits as part of its setup). The one component that will not be updated by the above setup (if you already have it installed) is the NuGet Package Manager.  If you already have NuGet installed, please go to the Visual Studio Extensions Manager (via the Tools –> Extensions menu option) and click on the “Updates” tab.  You should see NuGet listed there – please click the “Update” button next to it to have VS update the extension to today’s release. If you do not have NuGet installed (and did not install the ASP.NET MVC RC build), then NuGet will be installed as part of your ASP.NET MVC 3 setup, and you do not need to take any additional steps to make it work.

    Summary

    We are really close to the final ASP.NET MVC 3 release, and will deliver the final “RTM” build of it next month.  It has been only a little over 7 months since ASP.NET MVC 2 shipped, and I’m pretty amazed by the huge number of new features, improvements, and refinements that the team has been able to add with this release (Razor, Unobtrusive JavaScript, NuGet, Dependency Injection, Output Caching, and a lot, lot more).  I’ll be doing a number of blog posts over the next few weeks talking about many of them in more depth. Hope this helps, Scott. P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
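
    The screenshots that illustrated these API changes in the original post are not reproduced in this excerpt; a rough C# sketch of the renamed attributes and new APIs described above (controller, model, and action names here are invented for illustration):

        using System;
        using System.Web.Mvc;
        using System.Web.SessionState;

        // [SessionState] replaces the RC's [ControllerSessionState]
        [SessionState(SessionStateBehavior.Disabled)]
        public class ProductController : Controller
        {
            [OutputCache(Duration = 60)]   // no VaryByParam needed in MVC 3
            public ActionResult Details(int id)
            {
                // ViewBag replaces the earlier ViewModel/View dynamic properties
                ViewBag.Message = "Rendered at " + DateTime.Now;
                return View(new Product { Name = "Sample", Description = "<b>demo</b>" });
            }
        }

        public class Product
        {
            [Remote("IsUniqueName", "Product", AdditionalFields = "CategoryID")]  // was "Fields" in the RC
            public string Name { get; set; }

            [AllowHtml]   // replaces the RC's [SkipRequestValidation]
            public string Description { get; set; }

            public int CategoryID { get; set; }
        }

        // In a Razor view, opting out of the default HTML encoding is explicit:
        //   @Html.Raw(Model.Description)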

    Read the article

  • Explain Model View Controller

    - by Channel72
    My experience with developing dynamic websites is limited mostly to Java servlets. I've used Tomcat to develop various Java servlets, and I wouldn't hesitate to say that I'm reasonably proficient with this technology, as well as with client-side HTML/CSS/JavaScript for the front-end. When I think "dynamic website", I think: the user requests a URL with a query string, the server receives the query, and then proceeds to output HTML dynamically in order to respond to the query. This often involves communication with a database in order to fetch requested data for display. This is basically the idea behind the doGet method of a Java HttpServlet. But these days, I'm hearing more and more about newer frameworks such as Django and Ruby on Rails, all of which take advantage of the "Model View Controller" architecture. I've read various articles which explain MVC, but I'm having trouble really understanding the benefits. I understand that the general idea is to separate business logic from UI logic, but I fail to see how this is anything really different from normal web programming. Web programming, by its very nature, forces you to separate business logic (back-end server-side programming) from UI programming (client-side HTML or JavaScript), because the two exist in entirely different spheres of programming. Question: What does MVC offer over something like a Java servlet, and, more importantly, what exactly is MVC and how is it different from what you would normally do to develop a dynamic website using a more traditional approach such as a Java servlet (or even something older like CGI)? If possible, when explaining MVC, please provide an example which illustrates how MVC is applied to the web development process, and how it is beneficial.

    Read the article

  • Unable to cast transparent proxy to type <type>

    - by Rick Strahl
    This is not the first time I've run into this wonderful error while creating new AppDomains in .NET and then trying to load types and access them across App Domains. In almost all cases, the problem comes from the two AppDomains involved loading different copies of the same type. Unless the types match exactly and come exactly from the same assembly, the typecast will fail. The most common scenario is that the types are loaded from different assemblies - as unlikely as that sounds.

    An Example of Failure

    To give some context, I'm working on some old code in Html Help Builder that creates a new AppDomain in order to parse assembly information for documentation purposes. I create a new AppDomain in order to load up an assembly, process it, and then immediately unload it along with the AppDomain. The AppDomain allows for unloading that otherwise wouldn't be possible, as well as isolating my code from the assembly that's being loaded. The process to accomplish this is fairly established and I use it for lots of applications that use add-in like functionality - basically anywhere code needs to be isolated and have the ability to be unloaded. My pattern for this is:

        Create a new AppDomain
        Load a Factory Class into the AppDomain
        Use the Factory Class to load additional types from the remote domain

    Here's the relevant code from my TypeParserFactory that creates a domain and then loads a specific type - TypeParser - that is accessed cross-AppDomain in the parent domain:

        public class TypeParserFactory : System.MarshalByRefObject, IDisposable
        {
            …
            /// <summary>
            /// TypeParser Factory method that loads the TypeParser
            /// object into a new AppDomain so it can be unloaded.
            /// Creates AppDomain and creates type.
            /// </summary>
            /// <returns></returns>
            public TypeParser CreateTypeParser()
            {
                if (!CreateAppDomain(null))
                    return null;

                // Create the instance inside of the new AppDomain
                // Note: remote domain uses local EXE's AppBasePath!!!
                TypeParser parser = null;
                try
                {
                    Assembly assembly = Assembly.GetExecutingAssembly();
                    string assemblyPath = Assembly.GetExecutingAssembly().Location;
                    parser = (TypeParser)this.LocalAppDomain.CreateInstanceFrom(assemblyPath,
                                             typeof(TypeParser).FullName).Unwrap();
                }
                catch (Exception ex)
                {
                    this.ErrorMessage = ex.GetBaseException().Message;
                    return null;
                }
                return parser;
            }

            private bool CreateAppDomain(string lcAppDomain)
            {
                if (lcAppDomain == null)
                    lcAppDomain = "wwReflection" + Guid.NewGuid().ToString().GetHashCode().ToString("x");

                AppDomainSetup setup = new AppDomainSetup();

                // *** Point at current directory
                setup.ApplicationBase = AppDomain.CurrentDomain.BaseDirectory;
                //setup.PrivateBinPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "bin");

                this.LocalAppDomain = AppDomain.CreateDomain(lcAppDomain, null, setup);

                // Need a custom resolver so we can load assembly from non current path
                AppDomain.CurrentDomain.AssemblyResolve += new ResolveEventHandler(CurrentDomain_AssemblyResolve);

                return true;
            }
            …
        }

    Note that the classes must be either [Serializable] (by value) or inherit from MarshalByRefObject in order to be accessible remotely. Here I need to call methods on the remote object, so all classes are MarshalByRefObject. The specific problem code is loading up a new type which points at an assembly that is visible both in the current domain and the remote domain, and then instantiating a type from it.
    This is the code in question:

        Assembly assembly = Assembly.GetExecutingAssembly();
        string assemblyPath = Assembly.GetExecutingAssembly().Location;
        parser = (TypeParser)this.LocalAppDomain.CreateInstanceFrom(assemblyPath,
                                 typeof(TypeParser).FullName).Unwrap();

    The last line of code is what blows up with the Unable to cast transparent proxy to type <type> error. Without the cast the code actually returns a TransparentProxy instance, but the cast is what blows up. In other words I AM in fact getting a TypeParser instance back, but it can't be cast to the TypeParser type that is loaded in the current AppDomain.

    Finding the Problem

    To see what's going on I tried using the .NET 4.0 dynamic type on the result, and lo and behold, it worked with dynamic - the value returned is actually a TypeParser instance:

        Assembly assembly = Assembly.GetExecutingAssembly();
        string assemblyPath = Assembly.GetExecutingAssembly().Location;
        object objparser = this.LocalAppDomain.CreateInstanceFrom(assemblyPath,
                               typeof(TypeParser).FullName).Unwrap();

        // dynamic works
        dynamic dynParser = objparser;
        string info = dynParser.GetVersionInfo();  // method call works

        // casting fails
        parser = (TypeParser)objparser;

    So clearly a TypeParser type is coming back, but nevertheless it's not the right one. Hmmm… mysterious. Another couple of tries reveal the problem, however:

        // works
        dynamic dynParser = objparser;
        string info = dynParser.GetVersionInfo();  // method call works

        // c:\wwapps\wwhelp\wwReflection20.dll (current execution folder)
        string info3 = typeof(TypeParser).Assembly.CodeBase;

        // c:\program files\vfp9\wwReflection20.dll (my COM client EXE's folder)
        string info4 = dynParser.GetType().Assembly.CodeBase;

        // fails
        parser = (TypeParser)objparser;

    As you can see, the second value is coming from a totally different assembly. Note that this is even though I EXPLICITLY SPECIFIED an assembly path to load the assembly from! Instead .NET decided to load the assembly from the original ApplicationBase folder. Ouch! How I actually tracked this down was a little more tedious: I added a method like this to both the factory and the instance types and then compared notes:

        public string GetVersionInfo()
        {
            return ".NET Version: " + Environment.Version.ToString() + "\r\n" +
                   "wwReflection Assembly: " + typeof(TypeParserFactory).Assembly.CodeBase.Replace("file:///", "").Replace("/", "\\") + "\r\n" +
                   "Assembly Cur Dir: " + Directory.GetCurrentDirectory() + "\r\n" +
                   "ApplicationBase: " + AppDomain.CurrentDomain.SetupInformation.ApplicationBase + "\r\n" +
                   "App Domain: " + AppDomain.CurrentDomain.FriendlyName + "\r\n";
        }

    For the factory I got:

        .NET Version: 4.0.30319.239
        wwReflection Assembly: c:\wwapps\wwhelp\bin\wwreflection20.dll
        Assembly Cur Dir: c:\wwapps\wwhelp
        ApplicationBase: C:\Programs\vfp9\
        App Domain: wwReflection534cfa1f

    For the instance type I got:

        .NET Version: 4.0.30319.239
        wwReflection Assembly: C:\Programs\vfp9\wwreflection20.dll
        Assembly Cur Dir: c:\wwapps\wwhelp
        ApplicationBase: C:\Programs\vfp9\
        App Domain: wwDotNetBridge_56006605

    which clearly shows the problem. You can see that the two are running in different AppDomains, and each is loading the assembly from a different location. Probably a better solution yet (for ANY kind of assembly loading problem) is to use the .NET Fusion Log Viewer to trace assembly loads. The Fusion viewer will show a load trace for each assembly loaded and where it's looking to find it.
    Here's what the viewer looks like: The last trace above that I found for the second wwReflection20 load (the one that is wonky) looks like this:

        *** Assembly Binder Log Entry (1/13/2012 @ 3:06:49 AM) ***
        The operation was successful.
        Bind result: hr = 0x0. The operation completed successfully.
        Assembly manager loaded from: C:\Windows\Microsoft.NET\Framework\V4.0.30319\clr.dll
        Running under executable c:\programs\vfp9\vfp9.exe
        --- A detailed error log follows.
        === Pre-bind state information ===
        LOG: User = Ras\ricks
        LOG: DisplayName = wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null (Fully-specified)
        LOG: Appbase = file:///C:/Programs/vfp9/
        LOG: Initial PrivatePath = NULL
        LOG: Dynamic Base = NULL
        LOG: Cache Base = NULL
        LOG: AppName = vfp9.exe Calling assembly : (Unknown).
        ===
        LOG: This bind starts in default load context.
        LOG: Using application configuration file: C:\Programs\vfp9\vfp9.exe.Config
        LOG: Using host configuration file:
        LOG: Using machine configuration file from C:\Windows\Microsoft.NET\Framework\V4.0.30319\config\machine.config.
        LOG: Policy not being applied to reference at this time (private, custom, partial, or location-based assembly bind).
        LOG: Attempting download of new URL file:///C:/Programs/vfp9/wwReflection20.DLL.
        LOG: Assembly download was successful. Attempting setup of file: C:\Programs\vfp9\wwReflection20.dll
        LOG: Entering run-from-source setup phase.
        LOG: Assembly Name is: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null
        LOG: Binding succeeds. Returns assembly from C:\Programs\vfp9\wwReflection20.dll.
        LOG: Assembly is loaded in default load context.
        WRN: The same assembly was loaded into multiple contexts of an application domain:
        WRN: Context: Default | Domain ID: 2 | Assembly Name: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null
        WRN: Context: LoadFrom | Domain ID: 2 | Assembly Name: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null
        WRN: This might lead to runtime failures.
        WRN: It is recommended to inspect your application on whether this is intentional or not.
        WRN: See whitepaper http://go.microsoft.com/fwlink/?LinkId=109270 for more information and common solutions to this issue.

    Notice that the fusion log clearly shows that the .NET loader makes no attempt to even load the assembly from the path I explicitly specified.

    Remember your Assembly Locations

    As mentioned earlier, all failures I've seen like this ultimately resulted from different versions of the same type being available in the two AppDomains. At first sight that seems ridiculous - how could the types be different, and why would you have multiple assemblies - but there are actually a number of scenarios where it's quite possible to have multiple copies of the same assembly floating around in multiple places. If you're hosting different environments (like hosting the Razor Engine, or the ASP.NET Runtime, for example) it's common to create a private BIN folder, and it's important to make sure that there's no overlap of assemblies. In my case of Html Help Builder the problem started because I'm using COM interop to access the .NET assembly and the above code. COM Interop has very specific requirements on where assemblies can be found, and because I was mucking around with the loader code today, I ended up moving assemblies around to a new location for explicit loading. The explicit load works in the main AppDomain, but failed in the remote domain as I showed.
    The solution here was simple enough: delete the extraneous assembly which was left around by accident. Not a common problem, but one that when it bites is pretty nasty to figure out, because it seems so unlikely that types wouldn't match. I know I've run into this a few times, and writing this down hopefully will make me remember in the future rather than poking around again for an hour trying to debug the issue as I did today. Hopefully it'll save some of you some time as well in the future.

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET, COM

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 28 (sys.dm_db_stats_properties)

    - by Tamarick Hill
    The sys.dm_db_stats_properties Dynamic Management Function returns information about the statistics that are currently on your database objects. This function takes two parameters, an object_id and a stats_id. Let's have a look at the result set from this function against the AdventureWorks2012 Sales.SalesOrderHeader table. To obtain the object_id and stats_id I will use a CROSS APPLY with the sys.stats system table:

        SELECT sp.*
        FROM sys.stats s
        CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
        WHERE sp.object_id = OBJECT_ID('Sales.SalesOrderHeader')

    The first two columns returned by this function are the object_id and the stats_id columns. The next column, 'last_updated', gives you the date and the time that a particular statistic was last updated. The next column, 'rows', gives you the total number of rows in the table as of the last statistic update date. The 'rows_sampled' column gives you the number of rows that were sampled to create the statistic. The 'steps' column represents the number of specific value ranges from the statistic histogram. The 'unfiltered_rows' column represents the number of rows before any filters are applied. If a particular statistic is not filtered, the 'unfiltered_rows' column will always equal the 'rows' column. Lastly we have the 'modification_counter' column, which represents the number of modifications to the leading column in a given statistic since the last time the statistic was updated. Probably the most important column from this Dynamic Management Function is the 'last_updated' column. You want to always ensure that you have accurate and updated statistics on your database objects. Accurate statistics are vital for the query optimizer to generate efficient and reliable query execution plans. Without accurate and updated statistics, the performance of your SQL Server would likely suffer. For more information about this Dynamic Management Function, please see the Books Online link below: http://msdn.microsoft.com/en-us/library/jj553546.aspx Follow me on Twitter @PrimeTimeDBA
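
    As a small illustration of that point, a query along these lines (the thresholds are arbitrary, chosen only for the example) can flag statistics that are stale or have accumulated many modifications:

        -- sketch: stats on Sales.SalesOrderHeader not updated for 7+ days,
        -- or whose leading column has seen more than 1000 modifications
        SELECT s.name, sp.last_updated, sp.rows, sp.modification_counter
        FROM sys.stats s
        CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
        WHERE sp.object_id = OBJECT_ID('Sales.SalesOrderHeader')
          AND (sp.last_updated < DATEADD(DAY, -7, SYSDATETIME())
               OR sp.modification_counter > 1000);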

    Read the article

  • Changing the connection factory JNDI dynamically in Ftp Adapter

    - by [email protected]
    Consider a use case where you need to send the same file over to five different ftp servers. The first thought that might come to mind is to create five FtpAdapter references, one for each connection-factory location. However, this is not the optimal approach, and this is exactly where "Dynamic Partner Links" come into play in 11g. If you're running the adapter in managed mode, it requires you to configure the connection factory JNDI in the appserver console for the FtpAdapter. In the sample below, I have mapped the connection-factory JNDI location "eis/Ftp/FtpAdapter" to the ftp server running on localhost. After you've configured the connection factory on your appserver, you will need to refer to the connection-factory JNDI in the jca artifact of your SCA process. In the example below, I've instructed the FTPOut reference to use the ftp server corresponding to "eis/Ftp/FtpAdapter". The good news is that you can change this connection-factory location dynamically using jca header properties in both the BPEL and Mediator service engines. In order to do so, the business scenario involving BPEL or Mediator would be required to use a reserved jca header property "jca.jndi", as shown below. Similarly, for Mediator, the mplan would look as shown below.

    Things to remember while using dynamic partner links:
    1) The connection factories must be pre-configured on the SOA server. In our BPEL example above, both "eis/Ftp/FtpAdater1" and "eis/Ftp/FtpAdater2" must be configured in the weblogic deployment descriptor for the FtpAdapter prior to deploying the scenario.
    2) Dynamic Partner Links are applicable to outbound invocations only.

    Read the article

  • SharePoint Unit Testing and Load Testing Finally?

    - by Kit Ong
    It has always been a real pain to incorporate extensive SharePoint unit testing and load testing in a project; could Visual Studio 2012 finally make this easier? It certainly looks like it. Here's a brief overview of SharePoint support in Visual Studio 2012.

    Load testing – We now support load testing for SharePoint out of the box. This is more involved than you might imagine due to how dynamic SharePoint is. You can't just record a script and play it back – it won't work, because SharePoint generates and expects dynamic data (like GUIDs). We've built the extensions to our load testing solution to parse the dynamic SharePoint data and include it appropriately in subsequent requests. So now you can record a script and play it back, and we will dynamically adjust it to match what SharePoint expects.

    Unit testing – One of the big problems with unit testing SharePoint is that most code requires SharePoint to be running, and trying to run tests against a live SharePoint instance is a pain. So we've built a SharePoint "emulator" using our new VS 2012 Fakes & Stubs capability. This will make unit testing of SharePoint components WAY easier.

    Read more at the link below:
    http://blogs.msdn.com/b/bharry/archive/2012/09/12/visual-studio-update-this-fall.aspx

    Read the article

  • Inside the DLR – Invoking methods

    - by Simon Cooper
    So, we’ve looked at how a dynamic call is represented in a compiled assembly, and how the dynamic lookup is performed at runtime. The last piece of the puzzle is how the resolved method gets invoked, and that is the subject of this post.

    Invoking methods

    As discussed in my previous posts, doing a full lookup and bind at runtime each and every single time the callsite gets invoked would be far too slow to be usable. The results obtained from the callsite binder must be cached, along with a series of conditions to determine whether the cached result can be reused. So, firstly, how are the conditions represented? These conditions can be anything; they are determined entirely by the semantics of the language the binder is representing. The binder has to be able to return arbitrary code that is then executed to determine whether the conditions apply or not. Fortunately, .NET 4 has a neat way of representing arbitrary code that can be easily combined with other code – expression trees. All the callsite binder has to return is an expression (called a ‘restriction’) that evaluates to a boolean, returning true when the restriction passes (indicating the corresponding method invocation can be used) and false when it doesn’t. If the bind result is also represented in an expression tree, these can be combined easily like so:

        if ([restriction is true])
        {
            [invoke cached method]
        }

    Take my example from my previous post:

        public class ClassA
        {
            public static void TestDynamic()
            {
                CallDynamic(new ClassA(), 10);
                CallDynamic(new ClassA(), "foo");
            }

            public static void CallDynamic(dynamic d, object o)
            {
                d.Method(o);
            }

            public void Method(int i) {}
            public void Method(string s) {}
        }

    When the Method(int) method is first bound, along with an expression representing the result of the bind lookup, the C# binder will return the restrictions under which that bind can be reused. In this case, it can be reused if the types of the parameters are the same:

        if (thisArg.GetType() == typeof(ClassA) && arg1.GetType() == typeof(int))
        {
            thisClassA.Method(i);
        }

    Caching callsite results

    So, now, it’s up to the callsite to link these expressions returned from the binder together in such a way that it can determine which one from the many it has cached it should use. This caching logic is all located in the System.Dynamic.UpdateDelegates class. It’ll help if you’ve got this type open in a decompiler to have a look yourself. For each callsite, there are 3 layers of caching involved:

        1. The last method invoked on the callsite.
        2. All methods that have ever been invoked on the callsite.
        3. All methods that have ever been invoked on any callsite of the same type.

    We’ll cover each of these layers in order.

    Level 1 cache: the last method called on the callsite

    When a CallSite<T> object is first instantiated, the Target delegate field (containing the delegate that is called when the callsite is invoked) is set to one of the UpdateAndExecute generic methods in UpdateDelegates, corresponding to the number of parameters to the callsite and the existence of any return value. These methods contain most of the caching, invoke, and binding logic for the callsite. The first time this method is invoked, the UpdateAndExecute method finds there aren’t any entries in the caches to reuse, and invokes the binder to resolve a new method.
    Once the callsite has the result from the binder, along with any restrictions, it stitches some extra expressions in, and replaces the Target field in the callsite with a compiled expression tree similar to this (in this example I’m assuming there’s no return value):

        if ([restriction is true])
        {
            [invoke cached method]
            return;
        }
        if (callSite._match)
        {
            _match = false;
            return;
        }
        else
        {
            UpdateAndExecute(callSite, arg0, arg1, ...);
        }

    Woah. What’s going on here? Well, this resulting expression tree is actually the first level of caching. The Target field in the callsite, which contains the delegate to call when the callsite is invoked, is set to the above code compiled from the expression tree into IL, and then into native code by the JIT. This code checks whether the restrictions of the last method that was invoked on the callsite (the ‘primary’ method) match, and if so, executes that method straight away. This means that, the next time the callsite is invoked, the first code that executes is the restriction check, executing as native code! This makes this restriction check on the primary cached delegate very fast. But what if the restrictions don’t match? In that case, the second part of the stitched expression tree is executed. What this section should be doing is calling back into the UpdateAndExecute method again to resolve a new method. But it’s slightly more complicated than that. To understand why, we need to understand the second and third level caches.

    Level 2 cache: all methods that have ever been invoked on the callsite

    When a binder has returned the result of a lookup, as well as updating the Target field with a compiled expression tree, stitched together as above, the callsite puts the same compiled expression tree in an internal list of delegates, called the rules list. This list acts as the level 2 cache. Why use the same delegate? Stitching together expression trees is an expensive operation. You don’t want to do it every time the callsite is invoked. Ideally, you would create one expression tree from the binder’s result, compile it, and then use the resulting delegate everywhere in the callsite. But, if the same delegate is used to invoke the callsite in the first place, and in the caches, that means each delegate needs two modes of operation. An ‘invoke’ mode, for when the delegate is set as the value of the Target field, and a ‘match’ mode, used when UpdateAndExecute is searching for a method in the callsite’s cache. Only in the invoke mode would the delegate call back into UpdateAndExecute. In match mode, it would simply return without doing anything. This mode is controlled by the _match field in CallSite<T>. The first time the callsite is invoked, _match is false, and so the Target delegate is called in invoke mode. Then, if the initial restriction check fails, the Target delegate calls back into UpdateAndExecute. This method sets _match to true, then calls all the cached delegates in the rules list in match mode to try and find one that passes its restrictions, and invokes it. However, there needs to be some way for each cached delegate to inform UpdateAndExecute whether it passed its restrictions or not. To do this, as you can see above, it simply re-uses _match, and sets it to false if it did not pass the restrictions.
    This allows the code within each UpdateAndExecute method to check for cache matches like so:

        foreach (T cachedDelegate in Rules) {
            callSite._match = true;
            cachedDelegate(); // sets _match to false if restrictions do not pass
            if (callSite._match) {
                // passed restrictions, and the cached method was invoked;
                // set this delegate as the primary target to invoke next time
                callSite.Target = cachedDelegate;
                return;
            }
            // no luck, try the next one...
        }

    Level 3 cache: all methods that have ever been invoked on any callsite with the same signature

    The reason for this cache should be clear – if a method has been invoked through a callsite in one place, then it is likely to be invoked on other callsites in the codebase with the same signature. Rather than living in the callsite, the ‘global’ cache for callsite delegates lives in the CallSiteBinder class, in the Cache field. This is a dictionary, typed on the callsite delegate signature, providing a RuleCache<T> instance for each delegate signature. It is accessed in the same way as the level 2 callsite cache, by the UpdateAndExecute methods. When a method is matched in the global cache, it is copied into the callsite and Target cache before being executed.

    Putting it all together

    So, how does this all fit together? Like so (I’ve omitted some implementation & performance details):

    [diagram omitted]

    That, in essence, is how the DLR performs its dynamic calls nearly as fast as statically compiled IL code: extensive use of expression trees, compiled to IL and then into native code; multiple levels of caching, the first of which executes immediately when the dynamic callsite is invoked; and a clever re-use of compiled expression trees that can be used in completely different contexts without being recompiled. All in all, a very fast and very clever reflection caching mechanism.
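    To make the stitching concrete, here is a minimal sketch of the same idea built with expression trees directly. This is not the real UpdateDelegates code – RuleStitchingSketch and Fallback are names I’ve invented, ClassA is trimmed to just Method(int), and a real rule would call back into UpdateAndExecute rather than a stub – but it shows how a restriction expression and a bound invocation combine into one compiled delegate:

        using System;
        using System.Linq.Expressions;

        // A sketch of rule stitching: "if (restriction) { invoke } else { fall back }".
        public static class RuleStitchingSketch
        {
            public static void Main()
            {
                // Stand-ins for the callsite arguments.
                ParameterExpression thisArg = Expression.Parameter(typeof(object), "thisArg");
                ParameterExpression arg1 = Expression.Parameter(typeof(object), "arg1");

                // Restriction: the runtime types must match those seen at bind time.
                Expression restriction = Expression.AndAlso(
                    Expression.TypeEqual(thisArg, typeof(ClassA)),
                    Expression.TypeEqual(arg1, typeof(int)));

                // Bind result: call ClassA.Method(int) on the cast arguments.
                Expression invocation = Expression.Call(
                    Expression.Convert(thisArg, typeof(ClassA)),
                    typeof(ClassA).GetMethod("Method", new[] { typeof(int) }),
                    Expression.Convert(arg1, typeof(int)));

                // Stub standing in for the call back into UpdateAndExecute.
                Expression fallback = Expression.Call(
                    typeof(RuleStitchingSketch).GetMethod(nameof(Fallback)));

                // Stitch the pieces together and compile to IL (JIT-compiled on first call).
                Expression body = Expression.IfThenElse(restriction, invocation, fallback);
                Action<object, object> rule =
                    Expression.Lambda<Action<object, object>>(body, thisArg, arg1).Compile();

                rule(new ClassA(), 10);    // restriction passes: dispatches to Method(int)
                rule(new ClassA(), "foo"); // restriction fails: falls through to Fallback
            }

            public static void Fallback() => Console.WriteLine("restriction failed: re-bind");
        }

        public class ClassA
        {
            public void Method(int i) => Console.WriteLine("Method(int): " + i);
        }

    The compiled rule has exactly the shape of the level 1 cache: the restriction check runs as native code, and only on a miss does control return to the binding machinery.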

    Read the article

  • Converting a bounded knapsack problem to 0/1 knapsack problem

    - by Ants
    I ran across a problem where the goal was to use dynamic programming (instead of other approaches). There is a distance to be spanned, and a set of cables of different lengths. What is the minimum number of cables needed to span the distance exactly? To me this looked like a knapsack problem, but since there could be multiples of a particular length, it was a bounded knapsack problem, rather than a 0/1 knapsack problem. (Treat the value of each item to be its weight.)

    Taking the naive approach (and not caring about the expansion of the search space), the method I used to convert the bounded knapsack problem into a 0/1 knapsack problem was simply to break up the multiples into singles and apply the well-known dynamic programming algorithm. Unfortunately, this leads to sub-optimal results. For example, given cables: 1 x 10ft, 1 x 7ft, 1 x 6ft, 5 x 3ft, 6 x 2ft, 7 x 1ft. If the target span is 13ft, the DP algorithm picks 7+6 to span the distance. A greedy algorithm would have picked 10+3, but it's a tie for minimum number of cables. The problem arises when trying to span 15ft. The DP algorithm ended up picking 6+3+3+3 to get 4 cables, while the greedy algorithm correctly picks 10+3+2 for only 3 cables.

    Anyway, doing some light scanning of converting bounded to 0/1, it seems like the well-known approach is to convert multiple items to { p, 2p, 4p, ... }. My question is how does this conversion work if p+2p+4p does not add up to the number of multiple items. For example: I have 5 3ft cables. I can't very well add { 3, 2x3, 4x3 } because 3 + 2x3 + 4x3 > 5x3. Should I add { 3, 4x3 } instead? [I'm currently trying to grok the "Oregon Trail Knapsack Problem" paper, but it currently looks like the approach used there is not dynamic programming.]
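    For what it's worth, here is a minimal C# sketch of the standard conversion (the names are made up for illustration, and it assumes the goal is the minimum cable count for an exact span). The usual trick is powers of two plus a remainder, so 5 copies split into bundles of 1, 2, and 2 – not { p, 4p } – which lets the 0/1 DP select any count from 0 to 5. Giving each bundle a cost equal to its copy count makes the DP minimize the number of cables directly:

        using System;
        using System.Collections.Generic;

        public static class CableSpanSketch
        {
            const int Infinity = int.MaxValue;

            public static void Main()
            {
                // (length in feet, available count), from the question.
                var cables = new (int Length, int Count)[]
                    { (10, 1), (7, 1), (6, 1), (3, 5), (2, 6), (1, 7) };

                Console.WriteLine(MinCables(cables, 13)); // 2  (10+3 or 7+6)
                Console.WriteLine(MinCables(cables, 15)); // 3  (10+3+2)
            }

            static int MinCables((int Length, int Count)[] cables, int target)
            {
                // Binary-split each bounded item into 0/1 bundles of 1, 2, 4, ...
                // copies plus a remainder: a count of 5 becomes bundles of 1, 2, 2.
                var bundles = new List<(int Weight, int Cost)>();
                foreach (var (length, count) in cables)
                {
                    int remaining = count;
                    for (int k = 1; k <= remaining; k *= 2)
                    {
                        bundles.Add((k * length, k));   // k cables taken as one unit
                        remaining -= k;
                    }
                    if (remaining > 0)
                        bundles.Add((remaining * length, remaining));
                }

                // Standard 0/1 knapsack DP: best[d] = fewest cables spanning exactly d feet.
                var best = new int[target + 1];
                for (int d = 1; d <= target; d++) best[d] = Infinity;

                foreach (var (weight, cost) in bundles)
                    for (int d = target; d >= weight; d--)   // descending: each bundle used once
                        if (best[d - weight] != Infinity)
                            best[d] = Math.Min(best[d], best[d - weight] + cost);

                return best[target]; // Infinity means the distance cannot be spanned exactly
            }
        }

    Run against the question's cable set, this returns 2 for a 13ft span and 3 for 15ft (10+3+2), matching the greedy result quoted above.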

    Read the article

  • Static pages for large photo album

    - by Phil P
    I'm looking for advice on software for managing a largish photo album for a website: 2000+ pictures, one-time drop (probably). I normally use MarginalHack's album, which does what I want: pre-generate thumbnails and HTML for the pictures, so I can serve without needing a dynamic run-time, so there's less attack surface to worry about. However, it doesn't handle pagination or the like, so it's unwieldy for this case.

    This is a one-time drop of pictures from a wedding, with a shared usercode/password for distribution to the guests; I don't wish to put the pictures in a third-party hosting environment. I don't wish to use PHP, simply because that's another run-time to worry about; I might relent and use something dynamic if it's Python- or Perl-based (as I can maintain things written in those).

    I currently have: Apache serving static files, Album-generated, with some sub-directories to divide up the content to be a little more manageable. Something like Album but with pagination already handled would be great, but I'm willing to have something a little more dynamic if it lets people comment or caption and stores the extra data in something like an SQLite DB. I'd want something light-weight, not a full-blown CMS with security updates every three months. I don't want to upload pictures of other people's children into a third-party free service where I don't know what the revenue model is. (For my site: revenue is none, costs out of pocket.)

    Existing server hosting is *nix, Apache, some WSGI. Client-side I have MacOS. Any advice?

    Read the article

  • Should I redo an abandoned project with Lightswitch?

    - by Elson
    I had a small project that I was doing on the side. It was basically a couple of forms linked to a DB. Access was out, because it was specifically meant to be a web application. Being a small project, I used ASP.NET Dynamic Data, but, for various reasons, the project ended before deployment. I met the client recently, and he said there was still a need for it. I'm considering restarting the project with Dynamic Data, but I've seen some LightSwitch demos and was suitably impressed with the beta. I will wait for the RTM if I use it, but is it a good idea to use LightSwitch to replace the Dynamic Data? The amount of work I put into the Dynamic Data site isn't really an issue.

    Additional information: it's a system that tracks production in a small factory, broken down by line, machine, and section, and will generate reports. I would guess that the data structure will remain fairly constant over time, but that the reporting requirements will grow. The other thing is that the factory is part of a larger group, and I'm hopeful that, if this system succeeds, similar work will be forthcoming for other factories.

    Read the article

  • Upgraded Ubuntu, all drives in one zpool marked unavailable

    - by Matt Sieker
    I just upgraded to Ubuntu 14.04, and I had two ZFS pools on the server. There was some minor issue with me fighting with the ZFS driver and the kernel version, but that's worked out now. One pool came online, and mounted fine. The other didn't. The main difference between the two is that one was just a pool of disks (video/music storage), and the other was a raidz set (documents, etc).

    I've already attempted exporting and re-importing the pool, to no avail; attempting to import gets me this:

        root@kyou:/home/matt# zpool import -fFX -d /dev/disk/by-id/
           pool: storage
             id: 15855792916570596778
          state: UNAVAIL
         status: One or more devices contains corrupted data.
         action: The pool cannot be imported due to damaged devices or data.
            see: http://zfsonlinux.org/msg/ZFS-8000-5E
         config:

                storage                                      UNAVAIL  insufficient replicas
                  raidz1-0                                   UNAVAIL  insufficient replicas
                    ata-SAMSUNG_HD103SJ_S246J90B134910       UNAVAIL
                    ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523  UNAVAIL
                    ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969  UNAVAIL

    The symlinks for those in /dev/disk/by-id also exist:

        root@kyou:/home/matt# ls -l /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910* /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51*
        lrwxrwxrwx 1 root root  9 May 27 19:31 /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910 -> ../../sdb
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910-part1 -> ../../sdb1
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910-part9 -> ../../sdb9
        lrwxrwxrwx 1 root root  9 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523 -> ../../sdd
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523-part1 -> ../../sdd1
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523-part9 -> ../../sdd9
        lrwxrwxrwx 1 root root  9 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969 -> ../../sde
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969-part1 -> ../../sde1
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969-part9 -> ../../sde9

    Inspecting the various /dev/sd* devices listed, they appear to be the correct ones (the 3 1TB drives that were in the raidz array).

    I've run zdb -l on each drive, dumping it to a file, and running a diff. The only difference on the 3 are the guid fields (which I assume is expected). All 3 labels on each one are basically identical, and are as follows:

        version: 5000
        name: 'storage'
        state: 0
        txg: 4
        pool_guid: 15855792916570596778
        hostname: 'kyou'
        top_guid: 1683909657511667860
        guid: 8815283814047599968
        vdev_children: 1
        vdev_tree:
            type: 'raidz'
            id: 0
            guid: 1683909657511667860
            nparity: 1
            metaslab_array: 33
            metaslab_shift: 34
            ashift: 9
            asize: 3000569954304
            is_log: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 8815283814047599968
                path: '/dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910-part1'
                whole_disk: 1
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 18036424618735999728
                path: '/dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523-part1'
                whole_disk: 1
                create_txg: 4
            children[2]:
                type: 'disk'
                id: 2
                guid: 10307555127976192266
                path: '/dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969-part1'
                whole_disk: 1
                create_txg: 4
        features_for_read:

    Stupidly, I do not have a recent backup of this pool.
    However, the pool was fine before the reboot, and Linux sees the disks fine (I have smartctl running now to double-check). So, in summary:

    - I upgraded Ubuntu, and lost access to one of my two zpools.
    - The difference between the pools is that the one that came up was JBOD; the other was raidz.
    - All drives in the unmountable zpool are marked UNAVAIL, with no notes for corrupted data.
    - The pools were both created with disks referenced from /dev/disk/by-id/.
    - The symlinks from /dev/disk/by-id to the various /dev/sd devices seem to be correct.
    - zdb can read the labels from the drives.
    - The pool has already been exported and re-imported once, and won't import again.

    Is there some sort of black magic I can invoke via zpool/zfs to bring these disks back into a reasonable array? Can I run zpool create zraid ... without losing my data? Is my data gone anyhow?

    Read the article

  • How to Eliminate Tape Backup and Off-site Storage Service?

    - by Daniel Lucas
    PLEASE READ UPDATE AT THE BOTTOM. THANKS! ;)

    Environment Info (all Windows):

    - 2 sites
    - 30 servers at site #1 (3TB of backup data)
    - 5 servers at site #2 (1TB of backup data)
    - MPLS backbone tunnel connecting site #1 and site #2

    Current Backup Process:

    Online Backup (disk-to-disk): Site #1 has a server running Symantec Backup Exec 12.5 with four 1TB USB 2.0 disks. BE jobs for full backups run nightly on all servers in site #1 to these disks. Site #2 backs up to a central file server there using software they already had when we purchased them. A BE job pulls that data nightly to site #1 and stores it on said disks.

    Off-site Backup (tape): Connected to our backup server is a tape drive. BE backs up the external disks to tape once a week, which gets picked up by our off-site storage company. Obviously we rotate two tape libraries; one is always here and one is always there.

    Requirements:

    - Eliminate the need for tape and the off-site storage service by doing disk-to-disk at each site and replicating site #1 to site #2 and vice versa.
    - A software-based solution, as hardware options have been too pricey (e.g., SonicWall, Arkeia).
    - Agents for Exchange, SharePoint, and SQL.

    Some Ideas So Far:

    Storage: DroboPro at each site with an initial 8TB of storage (these are expandable up to 16TB at present). I like these because they are rackmountable, allow disparate drives, and have iSCSI interfaces. They are relatively cheap too.

    Software: Symantec Backup Exec 12.5 already has all the agents and licenses we need. I'd like to keep using it unless there is a better solution, similarly priced, that does everything BE does plus deduplication and replication.

    Server: Because there is no more need for a SCSI adapter (for the tape drive), we are going to virtualize our backup server, as it is currently the only physical machine save for the SQL boxes.

    Problems:

    When replicating between sites we want as little data as possible to go across the pipe. There is no deduplication or compression in what I have laid out here so far. The files being replicated are BE's virtual tape libraries from our disk-to-disk backup. Because of this, each of those huge files will go across the wire every week, because they change every day.

    And Finally, the Question:

    Is there any software out there that does deduplication, or at least compression, to handle just our site-to-site replication? Or, looking at our setup, is there any other solution that I am missing that might be cheaper, faster, better? Thanks. Sorry so long.

    UPDATE: I've set a bounty on this question to get it more attention. I'm looking for software that will handle replication of data between two sites using the least amount of data possible (either compression, deduplication, or some other method). Something similar to rsync would work, but it needs to be native to Windows and not a port involving shenanigans to get up and running. Prefer a GUI-based product, and I don't mind shelling out a few bones if it works. Please, answers that meet the above criteria only. If you don't think one exists or if you think I'm being too restrictive, keep it to yourself. If after seven days there is no answer at all, so be it. Thanks again everyone.

    UPDATE 2: I really appreciate everyone coming forward with suggestions. There is no way for me to try all of these before the bounty expires. For now I'm going to let this bounty run out and whoever has the most votes will get the 100 rep points. Thanks again!

    Read the article

< Previous Page | 116 117 118 119 120 121 122 123 124 125 126 127  | Next Page >