Search Results

Search found 11819 results on 473 pages for 'parameter lists'.


  • SANS Mobility Policy Survey Webcast follow up

    - by Darin Pendergraft
    Hello Everyone! If you missed the SANS mobility survey webcast on October 23, here is a link to the replay and to the slides (note: you have to register to see the replay and to get the slides): https://www.sans.org/webcasts/byod-security-lists-policies-mobility-policy-management-survey-95429

    The webcast had a lot of great information about how organizations are setting up and managing their mobile access policies. Here are a few key takeaways:

    1. Who is most concerned about mobile access policy? Security Analysts >> CISOs >> CIOs - the focus is coming from the risk and security office, so what does that mean for the IT teams?

    2. How important is mobile policy? 77% said "Critical" or "Extremely Important", so mobile access policies will get a lot of attention.

    3. When asked about the state of their mobile policies, over 35% said they didn't have a mobile access policy and another 35% said they simply ask their employees to sign a usage agreement. So roughly 70% of the respondents were not actively managing or monitoring mobile access.

    Be sure to watch the webcast replay for all of the details. Box, Oracle and RSA were all co-sponsors of the survey and webcast, and all were invited to give a brief presentation at the end.

    Read the article

  • REST API wrapper - class design for 'lite' object responses

    - by sasfrog
    I am writing a class library to serve as a managed .NET wrapper over a REST API. I'm very new to OOP, and this task is an ideal opportunity for me to learn some OOP concepts in a real-life situation that makes sense to me.

    Some of the key resources/objects that the API returns come back with different levels of detail depending on whether the request is for a single instance, a list, or part of a "search all resources" response. This is obviously a good design for the REST API itself, so that full objects (which increase the size of the response and therefore the time taken to respond) aren't returned unless they're needed. So, to be clear:

    - .../car/1234.json returns the full Car object for 1234, with all its properties like colour, make, model, year, engine_size, etc. Let's call this full.
    - .../cars.json returns a list of Car objects, but only with a subset of the properties returned by .../car/1234.json. Let's call this lite.
    - .../search.json returns, among other things, a list of Car objects, but with minimal properties (only ID, make and model). Let's call this lite-lite.

    I want to know the pros and cons of each of the following possible designs, and whether there is a better design that I haven't covered:

    1. Create a Car class that models the lite-lite properties, and then have each of the more detailed responses inherit and extend this class.
    2. Create separate CarFull, CarLite and CarLiteLite classes corresponding to each of the responses.
    3. Create a single Car class that contains (nullable?) properties for the full response, and create constructors for each of the responses which populate it to the extent possible (and maybe include a property that returns the response type from which the instance was created).

    I expect, among other things, that there will be use cases for consumers of the wrapper where they will want to iterate through lists of Cars regardless of which response type they were created from, such that the three response types can contribute to the same list. Happy to be pointed to good resources on this sort of thing, and/or even told the name of the concept I'm describing so I can better target my research.
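
    As a concrete illustration of the first option (a sketch only - the class and property names here are hypothetical, chosen to match the properties mentioned above), the lite-lite shape becomes the base class and each richer response extends it:

        // lite-lite: the minimal shape returned by .../search.json
        public class CarSummary
        {
            public string Id { get; set; }
            public string Make { get; set; }
            public string Model { get; set; }
        }

        // lite: the subset returned by .../cars.json
        public class CarLite : CarSummary
        {
            public string Colour { get; set; }
            public int Year { get; set; }
        }

        // full: everything returned by .../car/{id}.json
        public class Car : CarLite
        {
            public decimal EngineSize { get; set; }
        }

    One property of this design worth noting for the iteration use case in the question: a List<CarSummary> can hold instances of all three classes, so consumers can mix results from the three response types in one list and still read the common properties.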

    Read the article

  • How to make the internal subwoofer work on an Asus G73JW?

    - by CodyLoco
    I have an Asus G73JW laptop which has an internal subwoofer built in. Currently, the system detects the internal speakers as a 2.0 system (the only other option I can change it to is 4.0). I found a bug report here: https://bugs.launchpad.net/ubuntu/+source/alsa-driver/+bug/673051 which discusses the bug, and according to it a fix was sent upstream back at the end of 2010. I would have thought this would have made it into 12.04, but I guess not? I tried following the link given at the very bottom to install the latest ALSA drivers, here: https://wiki.ubuntu.com/Audio/InstallingLinuxAlsaDriverModules - however I keep running into an error when trying to install:

        sudo apt-get install linux-alsa-driver-modules-$(uname -r)
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package linux-alsa-driver-modules-3.2.0-24-generic
        E: Couldn't find any package by regex 'linux-alsa-driver-modules-3.2.0-24-generic'

    I believe I have added the repository correctly:

        sudo add-apt-repository ppa:ubuntu-audio-dev/ppa
        [sudo] password for codyloco:
        You are about to add the following PPA to your system:
        This PPA will be used to provide testing versions of packages for supported Ubuntu releases.
        More info: https://launchpad.net/~ubuntu-audio-dev/+archive/ppa
        Press [ENTER] to continue or ctrl-c to cancel adding it
        Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /tmp/tmp.7apgZoNrqK --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver hkp://keyserver.ubuntu.com:80/ --recv 4E9F485BF943EF0EABA10B5BD225991A72B194E5
        gpg: requesting key 72B194E5 from hkp server keyserver.ubuntu.com
        gpg: key 72B194E5: public key "Launchpad Ubuntu Audio Dev team PPA" imported
        gpg: Total number processed: 1
        gpg: imported: 1 (RSA: 1)

    And I also ran an update as well (following the instructions in the fix above). Any ideas?

    Read the article

  • Which metric/list should be used to evaluate a whole software development team?

    - by adt
    The title might seem vague, so let me give you a little history to clarify the question. I have been hired as a consultant for a corporation's small development division (the company also owns a couple of software development companies). My ex-manager runs a BI team with reporters, analysts and developers. He asked me to evaluate the overall design, software development process and code quality. Here is what I found:

    - Lots of copy/paste code everywhere (no reuse)
    - Even though they have everything (TFS, VS Ultimate etc.), no build process - no CruiseControl.NET / TeamCity...
    - No unit tests
    - Web pages with 3700 lines of code; lots of huge functions (which could be divided into smaller ones)
    - No naming convention in either the database or the C# code
    - No third-party or open source projects
    - No IoC
    - No separation of concerns
    - No code quality checks (no NDepend, FxCop or anything)
    - No code review
    - No communication within the team

    They claim they wrote an application framework (6 months, 3 people), but I would hardly call it a framework (of course, no unit tests - there are some, but all commented out). The framework contains 14 projects, but some projects consist of 1 file with 20 lines of code. Honestly, what people are doing is fixing bugs all day (which will eventually produce more bugs), and they are kind of isolated from the community; some team members don't even know GitHub or Stack Overflow - they have probably landed there via Google, but they don't know about them.

    So here is the question: is this list OK, or am I being picky? Since I don't hold any grudge against them, I just want to be fair and honest, and I would like to hear your suggestions before I submit this list. And since this list will also be reviewed by the software division's manager, I don't want any heartbreak or anything like that.

    http://www.hanselman.com/altnetgeekcode/

    For example, I would love to reference lists like the one above, but I can't find any. Thanks in advance.

    Read the article

  • Linux, GNU GCC, ld, version scripts and the ELF binary format -- How does it work? [closed]

    - by themoondothshine
    I'm trying to learn more about library versioning in Linux and how to put it all to work. Here's the context:

    I have two versions of a dynamic library which expose the same set of interfaces, say libsome1.so and libsome2.so. An application is linked against libsome1.so. This application uses libdl.so to dynamically load another module, say libmagic.so. Now libmagic.so is linked against libsome2.so. Obviously, without using linker scripts to hide symbols in libmagic.so, at run-time all calls to interfaces in libsome2.so are resolved to libsome1.so. This can be confirmed by checking the value returned by libVersion() against the value of the macro LIB_VERSION.

    So next I tried to compile and link libmagic.so with a linker script which hides all symbols except 3 which are defined in libmagic.so and are exported by it. This works... or at least the libVersion() and LIB_VERSION values match (and it reports version 2, not 1). However, when some data structures are serialized to disk, I noticed some corruption. If, in the application's directory, I delete libsome1.so and create a soft link in its place pointing to libsome2.so, everything works as expected and the same corruption does not happen.

    I can't help but think that this may be caused by some conflict in the run-time linker's resolution of symbols. I've tried many things, like trying to link libsome2.so so that all symbols are aliased to symbol@@VER_2 (which I am still confused about, because the command nm -CD libsome2.so still lists symbols as symbol and not symbol@@VER_2), but nothing seems to work. What am I doing wrong?
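
    For reference, a minimal example of the kind of version script being discussed (a sketch - the version name and exported symbol are illustrative, matching the libVersion() function mentioned above), passed to the linker with -Wl,--version-script=<file>:

        VER_2 {
            global:
                libVersion;
            local:
                *;
        };

    Symbols listed under global: are exported tagged with the version node; local: * hides everything else, which is the same effect the question is trying to achieve for libmagic.so.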

    Read the article

  • How to solve this update-grub/dpkg error when installing aircrack-ng?

    - by Surbir
        root@me-desktop:~# sudo apt-get install aircrack-ng
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following NEW packages will be installed:
          aircrack-ng
        0 upgraded, 1 newly installed, 0 to remove and 446 not upgraded.
        1 not fully installed or removed.
        Need to get 1,579kB of archives.
        After this operation, 2,843kB of additional disk space will be used.
        Get:1 http://archive.ubuntu.com/ubuntu/ maverick/universe aircrack-ng i386 1:1.1-1 [1,579kB]
        Fetched 1,579kB in 1min 9s (22.7kB/s)
        Selecting previously deselected package aircrack-ng.
        (Reading database ... 520739 files and directories currently installed.)
        Unpacking aircrack-ng (from .../aircrack-ng_1%3a1.1-1_i386.deb) ...
        Processing triggers for man-db ...
        Setting up linux-image-3.0.1-030001-generic (3.0.1-030001.201108060905) ...
        Running depmod.
        update-initramfs: Generating /boot/initrd.img-3.0.1-030001-generic
        Warning: No support for locale: en_US.utf8
        Examining /etc/kernel/postinst.d.
        run-parts: executing /etc/kernel/postinst.d/dkms 3.0.1-030001-generic /boot/vmlinuz-3.0.1-030001-generic
        run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.0.1-030001-generic /boot/vmlinuz-3.0.1-030001-generic
        run-parts: executing /etc/kernel/postinst.d/nvidia-common 3.0.1-030001-generic /boot/vmlinuz-3.0.1-030001-generic
        run-parts: executing /etc/kernel/postinst.d/pm-utils 3.0.1-030001-generic /boot/vmlinuz-3.0.1-030001-generic
        run-parts: executing /etc/kernel/postinst.d/update-notifier 3.0.1-030001-generic /boot/vmlinuz-3.0.1-030001-generic
        run-parts: executing /etc/kernel/postinst.d/zz-update-grub 3.0.1-030001-generic /boot/vmlinuz-3.0.1-030001-generic
        exec: 15: update-grub: not found
        run-parts: /etc/kernel/postinst.d/zz-update-grub exited with return code 2
        Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.0.1-030001-generic.postinst line 1010.
        dpkg: error processing linux-image-3.0.1-030001-generic (--configure):
         subprocess installed post-installation script returned error exit status 2
        Setting up aircrack-ng (1:1.1-1) ...
        Errors were encountered while processing:
         linux-image-3.0.1-030001-generic
        E: Sub-process /usr/bin/dpkg returned an error code (1)
        root@me-desktop:~#

    Read the article

  • Ubuntu Dependency Problem in Activity log Manager

    - by Incredible
        incredible@incredible-Inspiron-N5010:~$ sudo apt-get -f install
        [sudo] password for incredible:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          activity-log-manager
        The following packages will be upgraded:
          activity-log-manager
        1 upgraded, 0 newly installed, 0 to remove and 287 not upgraded.
        1 not fully installed or removed.
        Need to get 0 B/60.3 kB of archives.
        After this operation, 29.7 kB disk space will be freed.
        Do you want to continue [Y/n]? y
        dpkg: dependency problems prevent configuration of activity-log-manager:
         activity-log-manager depends on activity-log-manager-common (= 0.9.4-0ubuntu3); however:
          Version of activity-log-manager-common on system is 0.9.4-0ubuntu3.1.
         activity-log-manager-control-center (0.9.4-0ubuntu3.1) breaks activity-log-manager (<< 0.9.4-0ubuntu3.1) and is installed.
          Version of activity-log-manager to be configured is 0.9.4-0ubuntu3.
        dpkg: error processing activity-log-manager (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        Errors were encountered while processing:
         activity-log-manager
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • "Unmet Dependencies" problem when trying apt-get install

    - by GChorn
    Anytime I try to install Python packages using the command:

        sudo apt-get install python-package

    I get the following output:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these:
        The following packages have unmet dependencies:
         linux-headers-generic : Depends: linux-headers-3.2.0-36-generic but it is not going to be installed
         linux-headers-generic-pae : Depends: linux-headers-3.2.0-36-generic-pae but it is not going to be installed
         linux-image-generic : Depends: linux-image-3.2.0-36-generic but it is not going to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    This seems to have started when these same three packages showed up in Ubuntu's Update Manager and kicked up an error when I tried to install them there. Based on the suggestion in the output above, I tried running:

        sudo apt-get -f install

    But this only gave me several instances of the following error:

        dpkg: error processing /var/cache/apt/archives/linux-image-3.2.0-36-generic_3.2.0-36.57_i386.deb (--unpack):
         unable to create `/lib/modules/3.2.0-36-generic/kernel/drivers/net/wireless/ath/carl9170/carl9170.ko.dpkg-new' (while processing `./lib/modules/3.2.0-36-generic/kernel/drivers/net/wireless/ath/carl9170/carl9170.ko'): No space left on device

    Now maybe I'm way off base here, but I'm wondering if the error could be coming from the "No space left on device" part? The thing is, I'm running Ubuntu as a VirtualBox VM, but I've got it set to dynamically increase its virtual hard drive space as needed, so why am I still getting this error? Here's my output when I use df -h:

        Filesystem        Size  Used  Avail  Use%  Mounted on
        /dev/sda1         6.9G  5.7G   869M   88%  /
        udev              494M  4.0K   494M    1%  /dev
        tmpfs             201M  784K   200M    1%  /run
        none              5.0M     0   5.0M    0%  /run/lock
        none              501M   76K   501M    0%  /run/shm
        VB_Shared_Folder  466G  271G   195G   59%  /media/sf_VB_Shared_Folder

    When I perform sudo apt-get -f install and the system says "After this operation, 192 MB of additional disk space will be used", does that mean 192 MB of my virtual machine's current disk, or 192 MB on top of the rest of my free space? As I said, my machine normally dynamically allocates additional disk space from the host machine, so I don't see why there would be space restrictions at all...

    Read the article

  • How to reset a List in C# and XNA [on hold]

    - by P3erfect
    I need to do a "retry" option when the player finishes the game.For doing this I thought to reset the lists of Monsters and other objects that moved at the first playing or which have been "killed".for example I have a list like that: //the enemy1 class is already done // in Game1 I declare it List<enemy1> enem1 = new List<enemy1>(); //Initialize method List<enemy1> enem1 = new List<enemy1>(); //LoadContent foreach (enemy1 enemy in enem1) { enemy.Load(Content); } enem1.Add(new enemy1(Content.Load<Texture2D>("enemy"), new Vector2(5900, 12600))); //Update foreach (enemy1 enemy in enem1) { enemy.Update(gameTime); } //after being shoted the enemies disappear and i remove them //if the monsters are shoted the bool "visible" goes from false to true for (int i = enem1.Count - 1; i >= 0; --i) { if (enem1[i].visible == true) enem1.RemoveAt(i); } //Draw foreach (enemy1 enemy in enem1) { if(enemy.visble==false) { enemy.Draw(spriteBatch, gameTime); } } //So my problem is to restart the game. I did this in Update method if(lost==false) { //update all the things... } if(lost==true)//this is if I die { //here I have to put the code that restore the list //I tried: foreach (enemy1 enemy in enem1) { enemy.visible=false; } player.life=3;//initializing the player,points,time player.position=initialPosition; points=0; time=0; }//the player works.. } } they should be drawn again but if I removed them they won't be drawn anymore.If I don't remove them ,instead, the enemies are in different places (because they follow me). Any suggestions to restore or reinitialize the list??

    Read the article

  • Why is Wine not installable on my system?

    - by RawX
    So I upgraded on a fresh install to Ubuntu 12.10, not beta, and I've tried installing Wine many times now, many different ways, but the gist of it is this:

    This error could be caused by required additional software packages which are missing or not installable. Furthermore there could be a conflict between software packages which are not allowed to be installed at the same time. The following packages have unmet dependencies: wine:

    Any ideas? It won't let me install the dependencies either; it says it needs another set of dependencies to install them. I'm on an Asus KJ50, 64-bit OS, dual boot with Windows 7.

        sudo apt-get install wine1.5
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         wine1.5 : Depends: wine1.5-i386 (= 1.5.15-0ubuntu1) but it is not installable
                   Recommends: gnome-exe-thumbnailer but it is not going to be installed or
                               kde-runtime but it is not going to be installed
                   Recommends: ttf-droid
                   Recommends: ttf-mscorefonts-installer but it is not going to be installed
                   Recommends: ttf-umefont but it is not going to be installed
                   Recommends: ttf-unfonts-core but it is not going to be installed
                   Recommends: winbind but it is not going to be installed
                   Recommends: winetricks but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    Read the article

  • MultiSystem script won't work! "Syntax error: redirection unexpected" - worked 2 days ago?

    - by user74005
    This is my first question. I use MultiSystem all of the time and have installed it on both Kubuntu and Ubuntu with no issues. I wiped my hard drive to try some new OSes. I'm now using the exact same OS (Ubuntu 12.05) I originally used to load my USB stick, and now I'm getting this ridiculous syntax error. I know the script is correct; I'm following the exact same steps I used to get to this point and I'm getting different results?! I'm very confused by this and have no clue how to begin addressing the issue. I now get the same syntax error on Kubuntu too, which previously had MultiSystem installed.

    I run:

        sh install-depot-multisystem.sh

    and get:

        Syntax error: redirection unexpected

    This worked literally two days ago. The only thing that has changed is my face has grown some more facial hair and my head hurts from banging it against the wall over this issue. The OS is exactly the same, the script is the same, but now it won't install. I'm lost and really hoping someone can help.

    Append: Just to append to this a bit - https://lists.ubuntu.com/archives/ub...er/000264.html - I needed to do a chmod 777 on the script. I'm still getting a syntax error on Kubuntu... but it did install successfully. I'll mark this as resolved! Thanks anyway, I'll try to spruce up on my Linux skills.

    Read the article

  • Not able to install LTT tool in 10.04

    - by Ashoka
    I have tried the commands below to install LTTng on VMware running Ubuntu 10.04, but got the following error. Please help.

    From link: https://launchpad.net/~lttng/+archive/ppa

        $ sudo apt-add-repository ppa:lttng/ppa
        $ sudo apt-get update
        $ sudo apt-get install lttng-tools lttng-modules-dkms babeltrace

        user@usr:~$ sudo apt-add-repository ppa:lttng/ppa
        Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /etc/apt/secring.gpg --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver keyserver.ubuntu.com --recv C541B13BD43FA44A287E4161F4A7DFFC33739778
        gpg: requesting key 33739778 from hkp server keyserver.ubuntu.com
        gpg: unable to execute program `/usr/local/libexec/gnupg/gpgkeys_curl': No such file or directory
        gpg: no handler for keyserver scheme `hkp'
        gpg: keyserver receive failed: keyserver error

        user@usr:~$ sudo apt-get update
        .....
        user@usr:~$ sudo apt-get install lttng-tools lttng-modules-dkms babeltrace
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Couldn't find package lttng-tools

    My system details:

        user@usr:~$ cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=10.04
        DISTRIB_CODENAME=lucid
        DISTRIB_DESCRIPTION="Ubuntu 10.04.4 LTS"
        user@usr:~$ uname -a
        Linux usr 2.6.32-42-generic #95-Ubuntu SMP Wed Jul 25 15:57:54 UTC 2012 i686 GNU/Linux

    Regards, Ashoka

    Read the article

  • When uninstalling all GRUB packages for EC2 AMI build script, how do I bypass prompting?

    - by Skaperen
    When I try to uninstall all GRUB packages from the cloud-image version of Ubuntu I use in AWS EC2, I get an interactive prompt. I need to put this all in a script under automation, so prompts need to be avoided. I have searched for ways to bypass this, and all of the features like debconf apply to prompts for config settings, which can be provided in advance. But the prompt I am getting is not a config prompt; it would be in the class of "are you sure" prompts. I tried --force-all and that did not work. The prompt refers to the files in /boot/grub, so I tried first doing rm -fr /boot/grub (they do all go away) and it still asks me if I want to delete them, even though there is nothing to delete. The wording of the prompt is:

        Do you want to have all GRUB 2 files removed from /boot/grub?
        This will make the system unbootable unless another boot loader is installed.
        Remove GRUB 2 from /boot/grub?    <YES>    <NO>

    The command I am running is:

        dpkg --purge grub-common grub-gfxpayload-lists grub-legacy-ec2 grub-pc grub-pc-bin grub2-common

    How can I get past this prompt without trying to encapsulate it in something that tries to answer it (which itself has issues when there is no TTY, so I want to avoid that)?

    Read the article

  • Problem installing Wine on Ubuntu 12.10

    - by akhumaini
    I am having a problem installing Wine on Ubuntu 12.10. I tried following some tutorials, but none of them worked.

        root@ubuntu:~# add-apt-repository ppa:ubuntu-wine/ppa
        root@ubuntu:~# apt-get update
        root@ubuntu:~# apt-get install wine1.5
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         wine1.5 : Depends: wine1.5-i386 (= 1.5.16-0ubuntu1) but it is not installable
                   Recommends: ttf-droid
                   Recommends: ttf-mscorefonts-installer but it is not going to be installed
                   Recommends: ttf-umefont but it is not going to be installed
                   Recommends: ttf-unfonts-core but it is not going to be installed
                   Recommends: winbind but it is not going to be installed
                   Recommends: winetricks but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.
        root@ubuntu:~#

    Can anyone help me solve the problem?

    Read the article

  • Dual Booting Windows 7 and Ubuntu 12.04

    - by mfes
    I have Windows 7 32-bit installed on a 64-bit laptop, and I am trying to install Ubuntu 12.04 from USB. I created the bootable USB using a live USB installer. I have 6 partitions in Windows 7, among which I created a 50 GB partition, and I also have 50 GB of unallocated space. I will install Ubuntu 12.04 in one of these two spaces.

    When I boot from USB, select the "Try Ubuntu" option and then, from the Ubuntu desktop, click "Install Ubuntu 12.04" and choose the "Something Else" option (I don't want to install Ubuntu alongside Windows or erase Windows entirely), only 3 partitions get listed. One is the 100 MB system reserved partition, the second is the C drive, and the third is listed as one whole space, i.e. the remaining space of the entire hard disk is listed instead of the individual partitions. I tried the GParted editor and it lists the same as Ubuntu 12.04.

    So what's the problem, and how can I make Ubuntu detect all the partitions so that I can install in the unallocated space or in the 50 GB partition? (P.S. - When I try the same on my 32-bit desktop, it detects all the partitions.)

    Read the article

  • Best practice for storing information from a php script for future use

    - by tRudgeF3llow
    My employer uses forms to help people search for products. The product lists can change from time to time, and the forms then need to be updated again. The product information can be accessed through a third-party API, which I started tinkering with; I've recently built a script that retrieves the information with PHP and creates and populates a form dynamically with JavaScript. So far so good, but...

    There are limitations to the API; mainly, it can only be accessed a certain number of times per hour. That is probably more than my form/script would use, but I want to create a script that is minimally intrusive. My main question is: what is the best practice for accessing the information once and storing it long enough to let the API limit reset? I was wondering about creating a cookie, but there is the possibility of users having them disabled. (Also, I am doing this as a personal project, but I like the people I work for and I think this would help them out.) Thanks in advance.

    Read the article

  • How to avoid tons of `instanceof` in collision detection?

    - by Prog
    Consider a simple game with 4 kinds of entities: Robots, Dogs, Missiles, Walls. Here's a simple collision-detection mechanism in pseudocode (I know, O(n^2) - irrelevant for this question):

        for (Entity entityA in entities) {
            for (Entity entityB in entities) {
                if (collision(entityA, entityB)) {
                    if (entityA instanceof Robot && entityB instanceof Dog)
                        entityB.die();
                    if (entityA instanceof Robot && entityB instanceof Missile) {
                        entityA.die();
                        entityB.die();
                    }
                    if (entityA instanceof Missile && entityB instanceof Wall)
                        entityB.die();
                    // .. and so on
                }
            }
        }

    Obviously this is very ugly, and it will get bigger and harder to maintain the more entities and the more conditions there are. One option to make this better is to have separate lists for each kind of entity, for example a Robots list, a Dogs list, etc., and then check for collisions of all Robots with Dogs, all Dogs with Walls, and so on. This is better, but I still don't think it's good.

    So my question is: the collision detection system spotted a collision - now what? What is the common way to react to the collision? Should the system notify the entity itself that it collided with something, and have it decide for itself how to react, e.g. entityA.reactToCollision(entityB)? Or is there some other solution?
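
    A common way to get rid of the instanceof chains (a sketch, not the one canonical answer - all names here are hypothetical) is to register one handler per pair of concrete types and look the handler up by the runtime types of the two colliding entities, so the rules become data instead of branching code:

        using System;
        using System.Collections.Generic;

        abstract class Entity
        {
            public bool Dead { get; private set; }
            public void Die() => Dead = true;
        }

        class Robot : Entity { }
        class Dog : Entity { }
        class Missile : Entity { }
        class Wall : Entity { }

        class CollisionRules
        {
            // One entry per ordered pair of types: the nested if-chains collapse into a table.
            private readonly Dictionary<(Type, Type), Action<Entity, Entity>> handlers =
                new Dictionary<(Type, Type), Action<Entity, Entity>>();

            public void Register<TA, TB>(Action<TA, TB> handler)
                where TA : Entity where TB : Entity
            {
                handlers[(typeof(TA), typeof(TB))] = (a, b) => handler((TA)a, (TB)b);
            }

            public void Handle(Entity a, Entity b)
            {
                if (handlers.TryGetValue((a.GetType(), b.GetType()), out var handler))
                    handler(a, b);
            }
        }

    Registering the rules then mirrors the original conditions one to one, and the inner loop shrinks to a single call:

        var rules = new CollisionRules();
        rules.Register<Robot, Dog>((robot, dog) => dog.Die());
        rules.Register<Robot, Missile>((robot, missile) => { robot.Die(); missile.Die(); });
        rules.Register<Missile, Wall>((missile, wall) => wall.Die());

        // inside the O(n^2) loop:
        // if (collision(entityA, entityB)) rules.Handle(entityA, entityB);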

    Read the article

  • Rsync: how to mount truecrypt on-the-fly on the receiving side?

    - by deepc
    The short version: how can I keep an rsync backup on a TrueCrypt volume? The hard part is to mount/unmount this volume on the fly when it is needed for rsync.

    Details

    This is my current backup configuration (which works fairly well for the most part):

    - backup source is on Win7 64 bit, destination is a remote Linux box (Debian)
    - actual data transfer is done by rsync via ssh (cwRsync with cygwin)
    - rsync daemon is started on demand via ssh

    On the Linux box the backup is protected by file permissions only. I want to increase security here and put the backup into a TrueCrypt volume. I can fuse-mount that volume manually in the shell. The question is now: how can I make rsync not only open an ssh connection and start the rsync daemon, but also mount the TrueCrypt volume before (and unmount it after)?

    My money is on option --rsync-path, which can be used to pass a command line to ssh - provided that stdin and stdout still work the same. I guess that command would have to be a shell script. Is this possible, and what would the script look like? For reference, here's a quote of that option:

        --rsync-path=PROGRAM
            Use this to specify what program is to be run on the remote machine to
            start-up rsync. Often used when rsync is not in the default remote-shell's
            path (e.g. --rsync-path=/usr/local/bin/rsync). Note that PROGRAM is run with
            the help of a shell, so it can be any program, script, or command sequence
            you'd care to run, so long as it does not corrupt the standard-in &
            standard-out that rsync is using to communicate. One tricky example is to
            set a different default directory on the remote machine for use with the
            --relative option. For instance:
                rsync -avR --rsync-path="cd /a/b && rsync" host:c/d /e/

    This is the full rsync man page.

    TrueCrypt volume auto-mount

    Solved! It turns out this option is actually the key to auto-mounting the TrueCrypt volume on the remote side. The following command line does the trick (one line!):

        rsync $options -e "ssh -p $port -i ../.ssh/id_dsa" --rsync-path="/usr/local/bin/truecrypt -d && /usr/local/bin/truecrypt --fs-options=rw,sync,utf8,uid=$UID,umask=0007 --non-interactive -p $password $pathToVolume $remoteMountDir && rsync" $localSourceDir $user:$remoteMountMountDir

    TrueCrypt volume auto-dismount

    Still open: how can I unmount the volume when rsync is done? Not sure if the following makes sense to anyone, but I'll give it a try... Right now I am unmounting (truecrypt -d), then mounting again, then continuing with rsync. At this point rsync needs to do its thing, but I don't know when it's done. Adding "... rsync && truecrypt -d" to the command line does not work, because then the rsync daemon does not start. This is because rsync starts the daemon with parameter --server on the remote side, and that parameter would go to the final truecrypt -d.

    Read the article

  • Blocking 'good' bots in nginx with multiple conditions for certain off-limits URLs where humans can go

    - by Glenn Plas
    After 2 days of searching/trying/failing I decided to post this here; I haven't found any example of someone doing the same, nor does what I tried seem to be working. I'm trying to send a 403 to bots not respecting the robots.txt file (even after downloading it several times), specifically Googlebot. It will support the following robots.txt definition:

        User-agent: *
        Disallow: /*/*/page/

    The intent is to allow Google to browse whatever they can find on the site but return a 403 for the following type of request. Googlebot seems to keep on nesting these links eternally, adding paging block after block:

        my_domain.com:80 - 66.x.67.x - - [25/Apr/2012:11:13:54 +0200] "GET /2011/06/page/3/?/page/2//page/3//page/2//page/3//page/2//page/2//page/4//page/4//page/1/&wpmp_switcher=desktop HTTP/1.1" 403 135 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

    It's a WordPress site, by the way. I don't want those pages to show up, even though after the robots.txt info got through they stopped for a while, only to begin crawling again later. It just never stops... I do want real people to see this. As you can see, Google gets a 403, but when I try this myself in a browser I get a 404 back. I want browsers to pass.

        root@my_domain:# nginx -V
        nginx version: nginx/1.2.0

    I tried different approaches, using a map and plain old (no-no) if's, and they both act the same. Under the http section:

        map $http_user_agent $is_bot {
            default 0;
            ~crawl|Googlebot|Slurp|spider|bingbot|tracker|click|parser|spider 1;
        }

    Under the server section:

        location ~ /(\d+)/(\d+)/page/ {
            if ($is_bot) {
                return 403; # Please respect the robots.txt file !
            }
        }

    I recently had to polish up my Apache skills for a client where I did about the same thing, like this:

        # Block real engines, not respecting robots.txt but allowing correct calls to pass
        # Google
        RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ Googlebot/2\.[01];\ \+http://www\.google\.com/bot\.html\)$ [NC,OR]
        # Bing
        RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ bingbot/2\.[01];\ \+http://www\.bing\.com/bingbot\.htm\)$ [NC,OR]
        # msnbot
        RewriteCond %{HTTP_USER_AGENT} ^msnbot-media/1\.[01]\ \(\+http://search\.msn\.com/msnbot\.htm\)$ [NC,OR]
        # Slurp
        RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ Yahoo!\ Slurp;\ http://help\.yahoo\.com/help/us/ysearch/slurp\)$ [NC]
        # block all page searches, the rest may pass
        RewriteCond %{REQUEST_URI} ^(/[0-9]{4}/[0-9]{2}/page/) [OR]
        # or with the wpmp_switcher=mobile parameter set
        RewriteCond %{QUERY_STRING} wpmp_switcher=mobile
        # ISSUE 403 / SERVE ERRORDOCUMENT
        RewriteRule .* - [F,L]
        # End if match

    This does a bit more than I asked nginx to do, but it's about the same principle and I'm having a hard time figuring this out for nginx. So my question would be: why would nginx serve my browser a 404? Why isn't it passing? The regex isn't matching for my UA:

        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.30 Safari/536.5"

    There are tons of examples for blocking based on UA alone, and that's easy. It also looks like the matching location is final, e.g. it's not 'falling through' for a regular user; I'm pretty certain this has some correlation with the 404 I get in the browser.

    As a cherry on top of it all, I also want Google to disregard the parameter wpmp_switcher=mobile; wpmp_switcher=desktop is fine, but I just don't want the same content being crawled multiple times. Even though I ended up adding wpmp_switcher=mobile via the Google Webmaster Tools pages (requiring me to sign up...), that also stopped for a while, but today they are back spidering the mobile sections.

    So in short, I need to find a way for nginx to enforce the robots.txt definitions. Can someone shell out a few minutes of their lives and push me in the right direction please? I really appreciate ANY response that makes me think harder ;-)

    Read the article

  • The dynamic Type in C# Simplifies COM Member Access from Visual FoxPro

    - by Rick Strahl
    I've written quite a bit in the past about Visual FoxPro interoperating with .NET, both for ASP.NET interacting with Visual FoxPro COM objects and for Visual FoxPro calling into .NET code via COM Interop. COM Interop with Visual FoxPro has a number of problems, but one of them at least got a lot easier with the introduction of dynamic type support in .NET.

    One of the biggest problems with COM Interop has been that it's been really difficult to pass dynamic objects from FoxPro to .NET and get them properly typed. The only way that any strong typing can occur in .NET for FoxPro components is via COM type library exports of Visual FoxPro components. Due to limitations in Visual FoxPro's type library support, as well as the dynamic nature of the Visual FoxPro language where few things are or can be described in the form of a COM type library, a lot of useful interaction between FoxPro and .NET required the use of messy Reflection code in .NET.

    Reflection is .NET's base interface to runtime type discovery and dynamic execution of code without requiring strong typing. In FoxPro terms it's similar to EVALUATE() functionality, albeit with a much more complex API and corresponding syntax. The Reflection APIs are fairly powerful, but they are rather awkward to use and require a lot of code. Even with wrapper utility classes for common EVAL()-style Reflection functionality, dynamically accessing COM objects passed to .NET is often pretty tedious and ugly.

    Let's look at a simple example. In the following code I use some FoxPro code to dynamically create an object in code and then pass this object to .NET. An alternative might be to create a new object on the fly by using SCATTER NAME on a database record. How the object is created is inconsequential, other than the fact that it's not defined as a COM object - it's a pure FoxPro object that is passed to .NET. Here's the code:

        *** Create .NET COM Instance
        loNet = CREATEOBJECT('DotNetCom.DotNetComPublisher')

        *** Create a Customer Object Instance (factory method)
        loCustomer = GetCustomer()
        loCustomer.Name = "Rick Strahl"
        loCustomer.Company = "West Wind Technologies"
        loCustomer.creditLimit = 9999999999.99
        loCustomer.Address.StreetAddress = "32 Kaiea Place"
        loCustomer.Address.Phone = "808 579-8342"
        loCustomer.Address.Email = "[email protected]"

        *** Pass Fox Object and echo back values
        ? loNet.PassRecordObject(loCustomer)

        RETURN

        FUNCTION GetCustomer
        LOCAL loCustomer, loAddress
        loCustomer = CREATEOBJECT("EMPTY")
        ADDPROPERTY(loCustomer, "Name", "")
        ADDPROPERTY(loCustomer, "Company", "")
        ADDPROPERTY(loCustomer, "CreditLimit", 0.00)
        ADDPROPERTY(loCustomer, "Entered", DATETIME())
        loAddress = CREATEOBJECT("Empty")
        ADDPROPERTY(loAddress, "StreetAddress", "")
        ADDPROPERTY(loAddress, "Phone", "")
        ADDPROPERTY(loAddress, "Email", "")
        ADDPROPERTY(loCustomer, "Address", loAddress)
        RETURN loCustomer
        ENDFUNC

    Now prior to .NET 4.0 you'd have to access this object passed to .NET via Reflection, and the method code to do this would look something like this in the .NET component:

        public string PassRecordObject(object FoxObject)
        {
            // *** using raw Reflection
            string Company = (string) FoxObject.GetType().InvokeMember(
                "Company", BindingFlags.GetProperty, null, FoxObject, null);

            // using the easier ComUtils wrappers
            string Name = (string) ComUtils.GetProperty(FoxObject, "Name");

            // Getting Address object - then getting child properties
            object Address = ComUtils.GetProperty(FoxObject, "Address");
            string Street = (string) ComUtils.GetProperty(Address, "StreetAddress");

            // using ComUtils 'Ex' functions you can use . syntax
            string StreetAddress = (string) ComUtils.GetPropertyEx(FoxObject, "Address.StreetAddress");

            return Name + Environment.NewLine +
                   Company + Environment.NewLine +
                   StreetAddress + Environment.NewLine + " FOX";
        }

    Note that the FoxObject is passed in as type object, which has no specific type. Since the object doesn't exist in .NET as a type signature, the object is passed without any specific type information as a plain, nondescript object. To retrieve a property, Reflection APIs like Type.InvokeMember or Type.GetProperty().GetValue() etc. need to be used. I made this code a little simpler by using the Reflection wrappers I mentioned earlier, but even with those ComUtils calls the code is pretty ugly, requiring passing the objects for each call and casting each element.

    Using .NET 4.0 Dynamic Typing makes this Code a lot cleaner

    Enter .NET 4.0 and the dynamic type. Replacing the input parameter to the .NET method from type object to dynamic makes the code to access the FoxPro component inside of .NET much more natural:

        public string PassRecordObjectDynamic(dynamic FoxObject)
        {
            // dynamic typing - no Reflection plumbing required
            string Company = FoxObject.Company;
            string Name = FoxObject.Name;

            // child object properties work with plain . syntax too
            string Address = FoxObject.Address.StreetAddress;

            return Name + Environment.NewLine +
                   Company + Environment.NewLine +
                   Address + Environment.NewLine + " FOX";
        }

    As you can see, the parameter is of type dynamic, which, as the name implies, performs Reflection lookups and evaluation on the fly, so all the Reflection code in the last example goes away. The code can use regular object '.' syntax to reference each of the members of the object; you can access properties and call methods this way using natural object language. Also note that all the type casts that were required in the Reflection code go away - dynamic types, like var, can infer the type to cast to based on the target assignment. As long as the type can be inferred by the compiler at compile time (i.e. the left side of the expression is strongly typed), no explicit casts are required. Note that although you get to use plain object syntax in the code above, you don't get IntelliSense in Visual Studio, because the type is dynamic and thus has no hard type definition in .NET.

    The above example calls a .NET component from VFP, but it also works the other way around. Another frequent scenario is .NET code calling into a FoxPro COM object that returns a dynamic result. Assume you have a FoxPro COM object that returns a FoxPro cursor record as an object:

        DEFINE CLASS FoxData AS SESSION OlePublic

        cAppStartPath = ""

        FUNCTION INIT
            THIS.cAppStartPath = ADDBS( JustPath(Application.ServerName) )
            SET PATH TO ( THIS.cAppStartpath )
        ENDFUNC

        FUNCTION GetRecord(lnPk)
            LOCAL loCustomer
            SELECT * FROM tt_Cust WHERE pk = lnPk ;
                   INTO CURSOR TCustomer
            IF _TALLY < 1
               RETURN NULL
            ENDIF
            SCATTER NAME loCustomer MEMO
            RETURN loCustomer
        ENDFUNC

        ENDDEFINE

    If you call this from a .NET application, you can now retrieve this data via COM Interop and cast the result as dynamic to simplify the data access of the dynamic FoxPro type that was created on the fly:

        int pk = 0;
        int.TryParse(Request.QueryString["id"], out pk);

        // Create Fox COM Object with COM Callable Wrapper
        FoxData foxData = new FoxData();

        dynamic foxRecord = foxData.GetRecord(pk);
        string company = foxRecord.Company;
        DateTime entered = foxRecord.Entered;

    This code looks simple and natural, as it should be - heck, you could write code like this in days long gone by in scripting languages like classic ASP, for example. Compared to the Reflection code that was previously necessary for similar scenarios, this is much easier to write, understand and maintain.

    For COM Interop and Visual FoxPro operation, dynamic type support in .NET 4.0 is a huge improvement, and it certainly makes it much easier to deal with FoxPro code that calls into .NET - regardless of whether you're using COM for calling Visual FoxPro objects from .NET (ASP.NET calling a COM component and getting a dynamic result returned) or whether FoxPro code is calling into a .NET COM component from a FoxPro desktop application. At one point or another FoxPro likely ends up passing complex dynamic data to .NET, and for this the dynamic typing makes coding much cleaner and more readable without having to create custom Reflection wrappers. As a bonus, the dynamic runtime that underlies the dynamic type is fairly efficient in terms of making Reflection calls, especially if members are repeatedly accessed.

    © Rick Strahl, West Wind Technologies, 2005-2010
    Posted in COM, FoxPro, .NET, CSharp

    Read the article

  • Creating a dynamic proxy generator with C# – Part 2 – Interceptor Design

    - by SeanMcAlinden
    Creating a dynamic proxy generator – Part 1 – Creating the Assembly builder, Module builder and caching mechanism

    For the latest code go to http://rapidioc.codeplex.com/

    Before getting too involved in generating the proxy, I thought it would be worthwhile going through the intended design; this is important as the next step is to start creating the constructors for the proxy.

    - Each proxy derives from a specified type.
    - The proxy has a corresponding constructor for each of the base type constructors.
    - The proxy has overrides for all methods and properties marked as Virtual on the base type.
    - For each overridden method, there is also a private method whose sole job is to call the base method.
    - For each overridden method, a delegate is created whose sole job is to call the private method that calls the base method.

    The following class diagram shows the main classes and interfaces involved in the interception process. I'll go through each of them to explain their place in the overall proxy.

    IProxy Interface

    The proxy implements the IProxy interface for the sole purpose of adding custom interceptors. This allows the created proxy to be cast as an IProxy, and interceptors can then be added simply by calling its AddInterceptor method. This is done internally within the proxy building process, so the consumer of the API doesn't need knowledge of this.

    IInterceptor Interface

    The IInterceptor interface has one method: Handle. The Handle method accepts an IMethodInvocation parameter, which contains methods and data for handling method interception. Multiple classes that implement this interface can be added to the proxy. Each method override in the proxy calls the Handle method rather than simply calling the base method. How the proxy fully works will be explained in the next section, MethodInvocation.

    IMethodInvocation Interface & MethodInvocation class

    The MethodInvocation will contain one main method and multiple helper properties.

    Continue Method

    The method Continue() has two functions hidden away from the consumer. When Continue is called, if there are multiple interceptors, the next interceptor's Handle method is called. Once all interceptors' Handle methods have been called, the Continue method then calls the base class method.

    Properties

    The MethodInvocation will contain multiple helper properties, including at least the following:

    - Method Name (Read Only)
    - Method Arguments (Read and Write)
    - Method Argument Types (Read Only)
    - Method Result (Read and Write) - this property remains null if the method return type is void
    - Target Object (Read Only)
    - Return Type (Read Only)

    DefaultInterceptor class

    The DefaultInterceptor class is a simple class that implements the IInterceptor interface. Here is the code:

        namespace Rapid.DynamicProxy.Interception
        {
            /// <summary>
            /// Default interceptor for the proxy.
            /// </summary>
            /// <typeparam name="TBase">The base type.</typeparam>
            public class DefaultInterceptor<TBase> : IInterceptor<TBase> where TBase : class
            {
                /// <summary>
                /// Handles the specified method invocation.
                /// </summary>
                /// <param name="methodInvocation">The method invocation.</param>
                public void Handle(IMethodInvocation<TBase> methodInvocation)
                {
                    methodInvocation.Continue();
                }
            }
        }

    This is automatically created in the proxy and is the first interceptor that each method override calls. Its sole function is to ensure that if no interceptors have been added, the base method is still called.

    Custom Interceptor Example

    A consumer of the Rapid.DynamicProxy API could create an interceptor for logging when the FirstName property of the User class is set. Just for illustration, I have also wrapped a transaction around the methodInvocation.Continue() method. This means that any overridden methods within the User class will run within a transaction scope.

        public class MyInterceptor : IInterceptor<User<int, IRepository>>
        {
            public void Handle(IMethodInvocation<User<int, IRepository>> methodInvocation)
            {
                if (methodInvocation.Name == "set_FirstName")
                {
                    Logger.Log("First name setting to: " + methodInvocation.Arguments[0]);
                }
                using (TransactionScope scope = new TransactionScope())
                {
                    methodInvocation.Continue();
                }
                if (methodInvocation.Name == "set_FirstName")
                {
                    Logger.Log("First name has been set to: " + methodInvocation.Arguments[0]);
                }
            }
        }

    Overridden Method Example

    To give a taster of what the overridden methods on the proxy would look like, the setter method for the property FirstName used in the above example would look similar to the following (this is not real code, but it will look similar):

        public override void set_FirstName(string value)
        {
            set_FirstNameBaseMethodDelegate callBase =
                new set_FirstNameBaseMethodDelegate(this.set_FirstNameProxyGetBaseMethod);
            object[] arguments = new object[] { value };
            IMethodInvocation<User<IRepository>> methodInvocation =
                new MethodInvocation<User<IRepository>>(this, callBase, "set_FirstName", arguments, interceptors);

            this.Interceptors[0].Handle(methodInvocation);
        }

    As you can see, a delegate instance is created which calls a private method on the class; the private method calls the base method and would look like the following:

        private void set_FirstNameProxyGetBaseMethod(string value)
        {
            base.set_FirstName(value);
        }

    To walk through the sequence:

    - The delegate is invoked when methodInvocation.Continue() is called within an interceptor.
    - The set_FirstName parameters are loaded into an object array.
    - The current instance, delegate, method name and method arguments are passed into the methodInvocation constructor (there will be more data, not illustrated here, passed in when created, including method info, return types, argument types etc.).
    - The DefaultInterceptor's Handle method is called with the methodInvocation instance as its parameter.

    Obviously methods can have return values, ref and out parameters etc.; in these cases the generated method override body will be slightly different from the above. I'll go into more detail on these aspects as we build them.

    Conclusion

    I hope this has been useful. I can't guarantee that the proxy will look exactly like the above, but at the moment this is pretty much what I intend to do. It's always worth downloading the code at http://rapidioc.codeplex.com/ to see the latest. There will also be some tests that you can debug through to help see what's going on.

    Cheers, Sean.

    Read the article

  • Compiling examples for consuming the REST Endpoints for WCF Service using Agatha

    - by REA_ANDREW
    I recently made two contributions to the Agatha Project by Davy Brion over on Google Code, and one of the things I wanted to follow up with was a post showing examples and some, seemingly required tid bits.  The contributions which I made where: To support StructureMap To include REST (JSON and XML) support for the service contract The examples which I have made, I want to format them so they fit in with the current format of examples over on Agatha and hopefully create and submit a third patch which will include these examples to help others who wish to use these additions. Whilst building these examples for both XML and JSON I have learnt a couple of things which I feel are not really well documented, but are extremely good practice and once known make perfect sense.  I have chosen a real basic e-commerce context for my example Requests and Responses, and have also made use of the excellent tool AutoMapper, again on Google Code. Setting the scene I have followed the Pipes and Filters Pattern with the IQueryable interface on my Repository and exposed the following methods to query Products: IQueryable<Product> GetProducts(); IQueryable<Product> ByCategoryName(this IQueryable<Product> products, string categoryName) Product ByProductCode(this IQueryable<Product> products, String productCode) I have an interface for the IProductRepository but for the concrete implementation I have simply created a protected getter which populates a private List<Product> with 100 test products with random data.  Another good reason for following an interface based approach is that it will demonstrate usage of my first contribution which is the StructureMap support.  Finally the two Domain Objects I have made are Product and Category as shown below: public class Product { public String ProductCode { get; set; } public String Name { get; set; } public Decimal Price { get; set; } public Decimal Rrp { get; set; } public Category Category { get; set; } }   public class Category { public String Name { get; set; } }   Requirements for the REST Support One of the things which you will notice with Agatha is that you do not have to decorate your Request and Response objects with the WCF Service Model Attributes like DataContract, DataMember etc… Unfortunately from what I have seen, these are required if you want the same types to work with your REST endpoint.  I have not tried but I assume the same result can be achieved by simply decorating the same classes with the Serializable Attribute.  Without this the operation will fail. Another surprising thing I have found is that it did not work until I used the following Attribute parameters: Name Namespace e.g. [DataContract(Name = "GetProductsRequest", Namespace = "AgathaRestExample.Service.Requests")] public class GetProductsRequest : Request { }   Although I was surprised by this, things kind of explained themselves when I got round to figuring out the exact construct required for both the XML and the REST.  One of the things which you already know and are then reminded of is that each of your Requests and Responses ultimately inherit from an abstract base class respectively. This information needs to be represented in a way native to the format being used.  I have seen this in XML but I have not seen the format which is required for the JSON. JSON Consumer Example I have used JQuery to create the example and I simply want to make two requests to the server which as you will know with Agatha are transmitted inside an array to reduce the service calls.  
I have also used a tool called json2 which is again over at Google Code simply to convert my JSON expression into its string format for transmission.  You will notice that I specify the type of Request I am using and the relevant Namespace it belongs to.  Also notice that the second request has a parameter so each of these two object are representing an abstract Request and the parameters of the object describe it. <script type="text/javascript"> var bodyContent = $.ajax({ url: "http://localhost:50348/service.svc/json/processjsonrequests", global: false, contentType: "application/json; charset=utf-8", type: "POST", processData: true, data: JSON.stringify([ { __type: "GetProductsRequest:AgathaRestExample.Service.Requests" }, { __type: "GetProductsByCategoryRequest:AgathaRestExample.Service.Requests", CategoryName: "Category1" } ]), dataType: "json", success: function(msg) { alert(msg); } }).responseText; </script>   XML Consumer Example For the XML Consumer example I have chosen to use a simple Console Application and make a WebRequest to the service using the XML as a request.  I have made a crude static method which simply reads from an XML File, replaces some value with a parameter and returns the formatted XML.  I say crude but it simply shows how XML Templates for each type of Request could be made and then have a wrapper utility in whatever language you use to combine the requests which are required.  The following XML is the same Request array as shown above but simply in the XML Format. <?xml version="1.0" encoding="utf-8" ?> <ArrayOfRequest xmlns="http://schemas.datacontract.org/2004/07/Agatha.Common" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"> <Request i:type="a:GetProductsRequest" xmlns:a="AgathaRestExample.Service.Requests"/> <Request i:type="a:GetProductsByCategoryRequest" xmlns:a="AgathaRestExample.Service.Requests"> <a:CategoryName>{CategoryName}</a:CategoryName> </Request> </ArrayOfRequest>   It is funny because I remember submitting a question to StackOverflow asking whether there was a REST Client Generation tool similar to what Microsoft used for their RestStarterKit but which could be applied to existing services which have REST endpoints attached.  I could not find any but this is now definitely something which I am going to build, as I think it is extremely useful to have but also it should not be too difficult based on the information I now know about the above.  Finally I thought that the Strategy Pattern would lend itself really well to this type of thing so it can accommodate for different languages. I think that is about it, I have included the code for the example Console app which I made below incase anyone wants to have a mooch at the code.  
    It is funny, because I remember submitting a question to StackOverflow asking whether there was a REST client generation tool, similar to what Microsoft used for their RestStarterKit, which could be applied to existing services with REST endpoints attached. I could not find any, but this is now definitely something I am going to build: I think it is extremely useful to have, and it should not be too difficult based on what I now know from the above. Finally, I thought that the Strategy pattern would lend itself really well to this type of thing, so it can accommodate different languages.

    I think that is about it. I have included the code for the example Console app below, in case anyone wants to have a mooch at it. As I said above, I want to reformat these to fit in with the current examples over on the Agatha project, but also, now thinking about it, make a Documentation web method…{brain ticking} :-)

    Cheers for now and here is the final bit of code:

        using System;
        using System.IO;
        using System.Net;
        using System.Reflection;

        class Program
        {
            static void Main(string[] args)
            {
                // POST the XML request array to the service's XML endpoint
                var request = WebRequest.Create("http://localhost:50348/service.svc/xml/processxmlrequests");
                request.Method = "POST";
                request.ContentType = "text/xml";

                using (var writer = new StreamWriter(request.GetRequestStream()))
                {
                    writer.WriteLine(GetExampleRequestsString("Category1"));
                }

                var response = request.GetResponse();
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    Console.WriteLine(reader.ReadToEnd());
                }
                Console.ReadLine();
            }

            // Reads the XML template from disk and substitutes the category parameter
            static string GetExampleRequestsString(string categoryName)
            {
                var data = File.ReadAllText(Path.Combine(
                    Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location),
                    "ExampleRequests.xml"));
                data = data.Replace("{CategoryName}", categoryName);
                return data;
            }
        }

    Read the article

  • Displaying an image on a LED matrix with a Netduino

    - by Bertrand Le Roy
    In the previous post, we’ve been flipping bits manually on three ports of the Netduino to simulate the data, clock and latch pins that a shift register expects. We did all that in order to control one line of a LED matrix and create a simple Knight Rider effect. It was rightly pointed out in the comments that the Netduino has built-in knowledge of the sort of serial protocol that this shift register understands, through a feature called SPI. That will of course make our code a whole lot simpler, but it will also make it a whole lot faster: writing to the Netduino ports is actually not that fast, whereas SPI is very, very fast. Unfortunately, the Netduino documentation for SPI is severely lacking. Instead, we’ve been reliably using the documentation for the Fez, another .NET microcontroller.

    To send data through SPI, we’ll just need to move a few wires around and update the code. SPI uses pin D11 for writing, pin D12 for reading (which we won’t do) and pin D13 for the clock. The latch pin is a parameter that can be set by the user. This is very close to the wiring we had before (data on D11, clock on D12 and latch on D13). We just have to move the latch from D13 to D10, and the clock from D12 to D13.

    The code that controls the shift register has slimmed down considerably with that change. Here is the new version, which I invite you to compare with what we had before:

        public class ShiftRegister74HC595
        {
            protected SPI Spi;

            public ShiftRegister74HC595(Cpu.Pin latchPin)
                : this(latchPin, SPI.SPI_module.SPI1)
            {
            }

            public ShiftRegister74HC595(Cpu.Pin latchPin, SPI.SPI_module spiModule)
            {
                var spiConfig = new SPI.Configuration(
                    SPI_mod: spiModule,
                    ChipSelect_Port: latchPin,
                    ChipSelect_ActiveState: false,
                    ChipSelect_SetupTime: 0,
                    ChipSelect_HoldTime: 0,
                    Clock_IdleState: false,
                    Clock_Edge: true,
                    Clock_RateKHz: 1000
                );
                Spi = new SPI(spiConfig);
            }

            public void Write(byte buffer)
            {
                Spi.Write(new[] { buffer });
            }
        }

    All we have to do here is configure SPI; the Write method couldn’t be any simpler. Everything is now handled in hardware by the Netduino. We set the frequency to 1MHz, which is largely sufficient for what we’ll be doing, but it could potentially go much higher.

    The shift register addresses the columns of the matrix. The rows are directly wired to ports D0 to D7 of the Netduino. The code writes to only one of those eight lines at a time, which will make it fast enough. The way an image is displayed is that we light the lines one after the other so fast that persistence of vision gives the illusion of a stable image:

        foreach (var bitmap in matrix.MatrixBitmap)
        {
            matrix.OnRow(row, bitmap, true);
            matrix.OnRow(row, bitmap, false);
            row++;
        }

    Now there is a twist here: we need to run this code as fast as possible in order to display the image with as little flicker as possible, but we’ll eventually have other things to do. In other words, we need the code driving the display to run in the background, except when we want to change what’s being displayed. Fortunately, the .NET Micro Framework supports multithreading. In our implementation, we’ve added an Initialize method that spins a new thread that is tied to the specific instance of the matrix it’s being called on.

        public LedMatrix Initialize()
        {
            DisplayThread = new Thread(() => DoDisplay(this));
            DisplayThread.Start();
            return this;
        }
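    The post doesn’t show DoDisplay itself, so here is a minimal sketch of what that background method might look like, assuming it simply runs the row-scanning loop above forever (the method name and the MatrixBitmap member come from the snippets in this post; the rest is an assumption):

        // Sketch: scan the rows in a tight loop so that persistence of vision
        // makes the currently set bitmap appear as a stable image. The thread
        // running this is killed by Dispose, shown further down.
        private static void DoDisplay(LedMatrix matrix)
        {
            while (true)
            {
                var row = 0;
                foreach (var bitmap in matrix.MatrixBitmap)
                {
                    matrix.OnRow(row, bitmap, true);   // light this row
                    matrix.OnRow(row, bitmap, false);  // turn it off before moving on
                    row++;
                }
            }
        }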
    Returning to Initialize: I quite like this way to spin a thread. As you may know, there is another, built-in way to contextualize a thread, by passing an object into the Start method. For that to work, the thread must have been constructed with a ParameterizedThreadStart delegate, which takes one parameter of type object. I like to use object as little as possible, so instead I’m constructing a closure with a lambda, currying it with the current instance. This way, everything remains strongly typed and there’s no casting to do. Note that this method would extend perfectly to several parameters.

    Of note as well is the return value of Initialize, a common technique to add some fluency to the API, enabling the matrix to be instantiated and initialized in a single line:

        using (var matrix = new LedMS88SR74HC595().Initialize())

    The “using” in the previous line is there because we have implemented IDisposable, so that the matrix kills the thread and clears the display when the user code is done with it:

        public void Dispose()
        {
            Clear();
            DisplayThread.Abort();
        }

    Thanks to the multi-threaded version of the matrix driver class, we can treat the display as a simple bitmap with a very synchronous programming model:

        matrix.Set(someimage);
        while (button.Read())
        {
            Thread.Sleep(10);
        }

    Here, the call into Set returns immediately, and from the moment the bitmap is set, the background display thread will keep refreshing it no matter what happens in the main thread. That enables us to wait or read a button’s port on the main thread, knowing that the current image will continue displaying unperturbed and without requiring manual refreshing. We’ve effectively hidden the implementation of the display behind a convenient, synchronous-looking API. Pretty neat, eh?

    Before I wrap up this post, I want to talk about one small caveat of using SPI rather than driving the shift register directly: when we got to the point where we could actually display images, we noticed that they were a mirror image of what we were sending in. Oh noes! Well, the reason for it is that SPI is sending the bits in a big-endian fashion, in other words backwards. Now sure, you could fix that in software by writing some bit-level code to reverse the bits we’re sending in, but there is a far more efficient solution than that. We are doing hardware here, so we can simply reverse the order in which the outputs of the shift register are connected to the columns of the matrix. That’s switching 8 wires around once, as compared to doing bit operations every time we send a line to display.
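    For reference, the software alternative would be a byte-level bit reversal applied to each line before writing it to SPI. A sketch of such a helper (not in the original code, shown only to illustrate the trade-off against rewiring):

        // Reverse the bit order of a byte, so an MSB-first SPI transfer
        // lights the matrix columns in the intended order.
        private static byte ReverseBits(byte value)
        {
            byte result = 0;
            for (var i = 0; i < 8; i++)
            {
                result <<= 1;                 // make room for the next bit
                result |= (byte)(value & 1);  // copy the lowest bit of the input
                value >>= 1;                  // move on to the next input bit
            }
            return result;
        }

    Eight shifts and ORs per byte is cheap, but doing it on every line of every refresh still loses to moving eight wires once.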
    All right, so bringing it all together, here is the code we need to write to display two images in succession, separated by a press on the board’s button:

        var button = new InputPort(Pins.ONBOARD_SW1, false, Port.ResistorMode.Disabled);
        using (var matrix = new LedMS88SR74HC595().Initialize())
        {
            // Oh, prototype is so sad!
            var sad = new byte[] { 0x66, 0x24, 0x00, 0x18, 0x00, 0x3C, 0x42, 0x81 };
            DisplayAndWait(sad, matrix, button);

            // Let's make it smile!
            var smile = new byte[] { 0x42, 0x18, 0x18, 0x81, 0x7E, 0x3C, 0x18, 0x00 };
            DisplayAndWait(smile, matrix, button);
        }

    And here is a video of the prototype running:

    [Video: the prototype in action]

    I’ve added an artificial delay between the display of each row of the matrix to clearly show what’s otherwise happening very fast. This way, you can clearly see each of the two images being displayed line by line. Next time, we’ll do no hardware changes, focusing instead on building a nice programming model for the matrix, with sprites, text and hardware scrolling. Fun stuff. By the way, can any of my readers guess where we’re going with all that?

    The code for this prototype can be downloaded here: http://weblogs.asp.net/blogs/bleroy/Samples/NetduinoLedMatrixDriver.zip

    Read the article

  • Routing Issue in ASP.NET MVC 3 RC 2

    - by imran_ku07
    Introduction:

    Two weeks ago, the ASP.NET MVC team shipped the ASP.NET MVC 3 RC 2 release. This release includes some new features and some performance optimizations. It also fixes most known bugs, but some minor issues are still present. Some of these issues are already discussed by Scott Guthrie at Update on ASP.NET MVC 3 RC2 (and a workaround for a bug in it). In addition to those, I have found another issue in this release, regarding routing. In this article, I will show you the issue and a simple workaround for it.

    Description:

    The easiest way to understand an issue is to reproduce it in an application. So create an MVC 2 application and an MVC 3 RC 2 application. Then, in both applications, open the global.asax file and update the default route as below:

        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        routes.MapRoute(
            "Default", // Route name
            "{controller}/{action}/{id1}/{id2}", // URL with parameters
            new { controller = "Home", action = "Index",
                  id1 = UrlParameter.Optional, id2 = UrlParameter.Optional } // Parameter defaults
        );

    Then open the Index view and add the following lines:

        <%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
            Inherits="System.Web.Mvc.ViewPage" %>

        <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
            Home Page
        </asp:Content>

        <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
            <% Html.RenderAction("About"); %>
        </asp:Content>

    The above view will issue a child request to the About action method. Now run both applications. The ASP.NET MVC 2 application will run just fine, but the ASP.NET MVC 3 RC 2 application will throw an exception.

    You may think that this is a routing issue, but that is not the case here: both the ASP.NET MVC 2 and ASP.NET MVC 3 RC 2 applications (created above) are built with .NET Framework 4.0, and both use the same routing defined in System.Web. Something is wrong in ASP.NET MVC 3 RC 2 itself. After digging into the ASP.NET MVC source code, I found that the UrlParameter class in ASP.NET MVC 3 RC 2 overrides the ToString method to simply return an empty string:

        public sealed class UrlParameter
        {
            public static readonly UrlParameter Optional = new UrlParameter();

            private UrlParameter()
            {
            }

            public override string ToString()
            {
                return string.Empty;
            }
        }

    In MVC 2, the ToString method was not overridden. So, to quickly fix the above problem, either replace the UrlParameter.Optional default values with a different value that is not null or empty (for example, a single white space), or replace them with an object of a new class containing the same code as the UrlParameter class, except that the ToString method is not overridden (or is overridden to return a string value other than null or empty). But by doing this you will lose the benefit of ASP.NET MVC 2's optional URL parameters.

    There may be many different ways to fix the above problem without losing the benefit of optional parameters. Here, I will create a new class, MyUrlParameter, with the same code as the UrlParameter class except that the ToString method is not overridden. Then I will create a base controller class whose constructor removes all MyUrlParameter route data parameters, just as ASP.NET MVC does with UrlParameter route data parameters early in the request.
    public class BaseController : Controller
    {
        public BaseController()
        {
            if (System.Web.HttpContext.Current.CurrentHandler is MvcHandler)
            {
                RouteValueDictionary rvd = ((MvcHandler)System.Web.HttpContext.Current.CurrentHandler)
                    .RequestContext.RouteData.Values;

                string[] matchingKeys = (from entry in rvd
                                         where entry.Value == MyUrlParameter.Optional
                                         select entry.Key).ToArray();

                foreach (string key in matchingKeys)
                {
                    rvd.Remove(key);
                }
            }
        }
    }

    public class HomeController : BaseController
    {
        public ActionResult Index(string id1)
        {
            ViewBag.Message = "Welcome to ASP.NET MVC!";
            return View();
        }

        public ActionResult About()
        {
            return Content("Child Request Contents");
        }
    }

    public sealed class MyUrlParameter
    {
        public static readonly MyUrlParameter Optional = new MyUrlParameter();

        private MyUrlParameter()
        {
        }
    }

    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

    routes.MapRoute(
        "Default", // Route name
        "{controller}/{action}/{id1}/{id2}", // URL with parameters
        new { controller = "Home", action = "Index",
              id1 = MyUrlParameter.Optional, id2 = MyUrlParameter.Optional } // Parameter defaults
    );

    MyUrlParameter is a copy of the UrlParameter class, except that it does not override the ToString method. Note that the default route is modified to use MyUrlParameter.Optional instead of UrlParameter.Optional. Also note that the BaseController constructor removes MyUrlParameter parameters from the current request's route data, so that the model binder will not bind these parameters to action method parameters. Now run the ASP.NET MVC 3 RC 2 application again, and you will find that it runs just fine.

    In case you are curious why the ASP.NET MVC 3 RC 2 application throws an exception when the UrlParameter class contains a ToString method that returns an empty string, you need to know something about how routing generates URLs. During URL generation, routing internally calls the ParsedRoute.Bind method, which contains the logic to match the route and build the URL. While building the URL, ParsedRoute.Bind calls the ToString method of the route values (in our case, UrlParameter.ToString) and appends the returned value to the URL. After appending, it also applies a check: if two consecutive returned values are empty, the current route is not matched, because otherwise an incorrect URL would be generated. Here is the snippet from the ParsedRoute.Bind method which proves this:

        if ((builder2.Length > 0) && (builder2[builder2.Length - 1] == '/'))
        {
            return null;
        }
        builder2.Append("/");
        ...........................................................
        ...........................................................
        if (RoutePartsEqual(obj3, obj4))
        {
            builder2.Append(UrlEncode(Convert.ToString(obj3, CultureInfo.InvariantCulture)));
            continue;
        }

    In the above example, the default values of both the id1 and id2 parameters are set to the UrlParameter object, whose ToString method returns an empty string. That's why this route will not match during URL generation, and why the child request issued by Html.RenderAction fails.
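    You can see the same failure in isolation, without Html.RenderAction, by asking the route collection to generate a URL directly. The following is a sketch; requestContext is assumed to be the current RequestContext (for example, ControllerContext.RequestContext inside a controller):

        // With the two-optional-parameter route above, both id1 and id2 yield
        // empty strings back to back, so ParsedRoute.Bind rejects the route.
        var values = new RouteValueDictionary(new { controller = "Home", action = "About" });
        VirtualPathData path = RouteTable.Routes.GetVirtualPath(requestContext, values);
        // path == null with UrlParameter.Optional, which is what makes
        // RenderAction blow up; with MyUrlParameter.Optional the route matches again.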
    Summary:

    In this article I showed you the routing issue and how to work around it. I explained the issue with an example, by creating an ASP.NET MVC 2 and an ASP.NET MVC 3 RC 2 application. Finally, I also explained the reason for the issue. Hopefully you will enjoy this article too.

    Read the article
