Search Results

Search found 393 results on 16 pages for 'scientific'.


  • changing the serialization procedure for a graph of objects (.net framework)

    - by pierusch
    Hello, I'm developing a scientific application using the .NET Framework. The application depends heavily upon a large data structure (a tree-like structure) that has been serialized using a standard BinaryFormatter object. The graph structure looks like this:

        <Serializable()> Public Class BigObjet
            Inherits List(Of smallObject)
        End Class

        <Serializable()> Public Class smallObject
            Inherits List(Of otherSmallerObjects)
        End Class
        ...

    The BinaryFormatter object does a nice job, but it's not optimized at all, and the entire data structure reaches around 100 MB on my filesystem. Deserialization works too, but it's pretty slow (around 30 seconds on my quad core). I've found a nice .dll on CodeProject (see "optimizing serialization..."), so I wrote a modified version of the classes above overriding the default serialization/deserialization procedure, reaching very good results. The problem is this: I can't lose the data previously serialized with the old version, and I'd like to be able to use the new serialization/deserialization method. I have some ideas, but I'm pretty sure someone will be able to give me proper and better advice!

    1. Use a "helper" graph of objects that takes care of the entire serialization/deserialization procedure, reading data from the old format and converting it into the classes I need. This could work, but the BinaryFormatter "needs" to know the types being serialized, so........ :(
    2. Modify the "old" graph to include a modified version of the serialization procedure, so I'll be able to deserialize old files and save them with the new format...... this doesn't sound too good, IMHO.

    Well, any help will be highly appreciated :)
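    The one-shot "load old, save new" converter from idea 2 is simple to sketch. Here it is in Python rather than VB.NET, purely to illustrate the migration pattern: pickle stands in for BinaryFormatter, the hand-rolled length-prefixed writer stands in for the optimized custom serializer, and the (x, y) leaf layout is a made-up stand-in for the real tree nodes:

        import pickle
        import struct

        def load_old(path):
            # Slow, general-purpose deserializer (the BinaryFormatter analogue).
            with open(path, "rb") as f:
                return pickle.load(f)

        def save_new(graph, path):
            # Compact hand-rolled writer: length-prefixed records instead of a
            # self-describing stream. Assumes graph is a list of lists of
            # (x, y) float pairs -- a hypothetical stand-in for the real nodes.
            with open(path, "wb") as f:
                f.write(struct.pack("<I", len(graph)))
                for small in graph:
                    f.write(struct.pack("<I", len(small)))
                    for x, y in small:
                        f.write(struct.pack("<dd", x, y))

        def migrate(old_path, new_path):
            save_new(load_old(old_path), new_path)

    Run the converter once per legacy file and the old format can be retired; only the converter needs to keep the old class definitions around.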

    Read the article

  • Architecture for database analytics

    - by David Cournapeau
    Hi, we have an architecture where we provide each customer Business Intelligence-like services for their website (internet merchant). Now I need to analyze those data internally (for algorithmic improvement, performance tracking, etc.), and they are potentially quite heavy: we have up to millions of rows / customer / day, and I may want to know how many queries we had in the last month, compared weekly, etc. ... that is on the order of billions of entries, if not more. The way it is currently done is quite standard: daily scripts scan the databases and generate big CSV files. I don't like this solution for several reasons:

    - as is typical with those kinds of scripts, they fall into the write-once-and-never-touched-again category
    - tracking things in "real time" is necessary (we have a separate toolset to query the last few hours at the moment)
    - this is slow and non-"agile"

    Although I have some experience in dealing with huge datasets for scientific usage, I am a complete beginner as far as traditional RDBMSs go. It seems that using a column-oriented database for analytics could be a solution (the analytics don't need most of the data we have in the app database), but I would like to know what other options are available for this kind of issue.

    Read the article

  • How would I instruct extconf.rb to use additional g++ optimization flags, and which are advisable?

    - by mohawkjohn
    I'm using Rice to write a C++ extension for a Ruby gem. The extension is in the form of a shared object (.so) file. This requires 'mkmf-rice' instead of 'mkmf', but the two (AFAIK) are pretty similar. By default, the compiler uses the flags -g -O2. Personally, I find this kind of silly, since it's hard to debug with any optimization enabled. I've resorted to editing the Makefile to take out the flags I don't like (e.g., removing -fPIC -shared when I need to debug using main() instead of Ruby's hooks). But I figure there's got to be a better way. I know I can just do $CPPFLAGS += " -DRICE" to add additional flags. But how do I remove things without editing the Makefile directly? A secondary question: what optimizations are safe for shared objects loaded by Ruby? Can I do things like -funroll-loops? What do you all recommend? It's a scientific computing project, so the faster the better. Memory is not much of an issue. Many thanks!

    Read the article

  • How to convert Big Endian and how to flip the highest bit?

    - by Robert Frank
    I am using a TStream to read binary data (thanks to this post: http://stackoverflow.com/questions/2878180/how-to-use-a-tfilestream-to-read-2d-matrices-into-dynamic-array). My next problem is that the data is big endian. From my reading, the Swap() method is seemingly deprecated. How would I byte-swap the types below?

    - 16-bit two's complement binary integer
    - 32-bit two's complement binary integer
    - 64-bit two's complement binary integer
    - IEEE single precision floating-point (are IEEE floats affected by endianness?)

    And, finally, since the data is unsigned, the creators of this dataset have stored the unsigned values as signed integers (excluding the IEEE floats). They instruct that one need only add an offset (2^15, 2^31, or 2^63) to recover the unsigned data. But they note that flipping the most significant bit is the fastest way to do that. How does one efficiently flip the most significant bit of a 16-, 32-, or 64-bit integer? So, if the data on disk (16-bit) is "85 FB", the desired result after reading the data, swapping, and bit flipping would be 1531. Is there a way to accomplish the swapping and bit flipping with generics so it fits into the generic answer at the link above? Yes, kids, THIS is how scientific astronomical data is stored by NASA, ESO, and all professional astronomers. This FITS standard is considered by some to be one of the most successful standards ever created, in its proliferation and flexibility!
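    The arithmetic is easy to sanity-check outside Delphi. A minimal Python sketch (illustrative only) that reads the "85 FB" example as big endian and flips the sign bit:

        import struct

        raw = bytes.fromhex("85FB")        # the 16-bit value as stored on disk
        (u,) = struct.unpack(">H", raw)    # ">H" = big-endian unsigned 16-bit: 0x85FB == 34299
        flipped = u ^ 0x8000               # flip the most significant bit
        print(flipped)                     # 1531, the desired result

    The same XOR trick generalizes with masks 0x80000000 and 0x8000000000000000 for the 32- and 64-bit cases; flipping the top bit and adding the 2^(n-1) offset are the same operation modulo 2^n, which is why the dataset authors call it the fastest route.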

    Read the article

  • Using java to create a logistic model - arrays and properties

    - by Oliver Burdekin
    I'm currently trying to create a Java model that will solve a problem we have. On a voluntary expedition, each week some people leave and some new people arrive. Accommodation is in tents. The tents sleep different numbers of people, and certain rules apply: males and females cannot be mixed, and volunteers can be one of four types (school children, research assistants, scientific staff, school teachers). So types of volunteer and sexes cannot be mixed. Each week the manager spends hours trying to work this out, so I've offered to build this model to keep my coding skills up. At present I'm working with arrays. Each tent is a 2D array [4][x], where x is the number of people it sleeps (each person sleeping there has 4 attributes). Each person is a 1D array with 4 attributes [4]. The idea is to check where people can go, cause the minimum movement for people staying on, and solve this logistic problem. Does anyone have any better suggestions as to how to solve this? At present I'm finding it necessary to write a lot of code setting up and querying arrays. Any help is appreciated.
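    One alternative to raw attribute arrays is to model people as records and bucket them by the attributes that cannot mix. A rough Python sketch of the idea (Python only for brevity; the same shape works as Java classes and maps), under the simplifying assumption that returning volunteers are ignored, so "minimum movement" is not handled:

        from collections import defaultdict

        def assign(volunteers, tents):
            # volunteers: list of (name, sex, vtype) tuples
            # tents: list of (tent_id, capacity) pairs
            buckets = defaultdict(list)
            for name, sex, vtype in volunteers:
                buckets[(sex, vtype)].append(name)   # only these groups may share a tent

            free = sorted(tents, key=lambda t: -t[1])    # try biggest tents first
            plan = {}
            for group, names in buckets.items():
                while names:
                    if not free:
                        raise ValueError("not enough tent capacity")
                    tent_id, capacity = free.pop(0)
                    plan[tent_id] = (group, names[:capacity])
                    names = names[capacity:]
            return plan

    Minimizing movement for people staying on turns this into a proper assignment problem; one simple extension is to seed each tent's list with its remaining occupants before first-fitting the newcomers.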

    Read the article

  • Fill lower matrix with vector by row, not column

    - by mhermans
    I am trying to read in a variance-covariance matrix written out by LISREL in the following format in a plain-text, whitespace-separated file:

         0.23675E+01
         0.86752E+00  0.28675E+01
        -0.36190E+00 -0.36190E+00  0.25381E+01
        -0.32571E+00 -0.32571E+00  0.84425E+00  0.25598E+01
        -0.37680E+00 -0.37680E+00  0.53136E+00  0.47822E+00  0.21120E+01
        -0.37680E+00 -0.37680E+00  0.53136E+00  0.47822E+00  0.91200E+00  0.21120E+01

    This is actually a lower diagonal matrix (including the diagonal). I can read in the values correctly with scan() or read.table(fill=T). I am, however, not able to correctly store the read-in vector in a matrix. The following code

        S <- diag(6)
        S[lower.tri(S, diag=T)] <- d

    fills the lower matrix by column, while it should fill it by row. Using matrix() does allow for the option byrow=TRUE, but this will fill in the whole matrix, not just the lower half (with diagonal). Is it possible to have both: only fill the lower matrix (with diagonal) and do it by row? (A separate issue I'm having: LISREL uses 'D+01' while R only recognises 'E+01' for scientific notation. Can you change this in R to accept 'D' as well?)
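    For what it's worth, the row-major fill is easy to express in Python/NumPy, shown here only as a cross-language illustration of the indexing order: np.tril_indices enumerates the lower triangle row by row, which is exactly the order the file is read.

        import numpy as np

        # the 21 values from the file, in the order they are read
        d = np.array([
             2.3675,
             0.86752,  2.8675,
            -0.36190, -0.36190,  2.5381,
            -0.32571, -0.32571,  0.84425,  2.5598,
            -0.37680, -0.37680,  0.53136,  0.47822,  2.1120,
            -0.37680, -0.37680,  0.53136,  0.47822,  0.91200,  2.1120,
        ])

        S = np.zeros((6, 6))
        S[np.tril_indices(6)] = d            # row-by-row fill of the lower triangle
        S = S + S.T - np.diag(np.diag(S))    # mirror into a full symmetric matrix

        # The 'D+01' exponent issue is just a string substitution while parsing:
        float("0.23675D+01".replace("D", "E"))   # -> 2.3675

    In R, the equivalent trick is to assign the vector into upper.tri(S, diag=TRUE) and then transpose, since the column-wise walk of the upper triangle matches a row-wise walk of the lower one.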

    Read the article

  • Show a number with specified number of significant digits

    - by dreeves
    I use the following function to convert a number to a string for display purposes (don't use scientific notation, don't use a trailing dot, round as specified):

        (* Show Number. Convert to string w/ no trailing dot. Round to the nearest r. *)
        Unprotect[Round]; Round[x_,0] := x; Protect[Round];
        shn[x_, r_:0] := StringReplace[
          ToString@NumberForm[Round[N@x,r], ExponentFunction->(Null&)],
          re@"\\.$"->""]

    (Note that re is an alias for RegularExpression.) That's been serving me well for years. But sometimes I don't want to specify the number of digits to round to; rather, I want to specify a number of significant figures. For example, 123.456 should display as 123.5 but 0.00123456 should display as 0.001235. To get really fancy, I might want to specify significant digits both before and after the decimal point. For example, I might want .789 to display as 0.8 but 789.0 to display as 789 rather than 800. Do you have a handy utility function for this sort of thing, or suggestions for generalizing my function above? Related: Suppressing a trailing "." in numerical output from Mathematica
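    The usual significant-figures trick is to derive the rounding digit from floor(log10(|x|)). A small Python sketch of the idea (illustrative, not a Mathematica answer; the function name is made up):

        import math

        def to_sig_figs(x, sig=4):
            # Round x to `sig` significant figures and render it without
            # scientific notation or a trailing dot.
            if x == 0:
                return "0"
            ndigits = sig - 1 - math.floor(math.log10(abs(x)))
            y = round(x, ndigits)
            s = f"{y:.{max(ndigits, 0)}f}"      # fixed-point, never E notation
            return s.rstrip("0").rstrip(".") if "." in s else s

        print(to_sig_figs(123.456))      # 123.5
        print(to_sig_figs(0.00123456))   # 0.001235
        print(to_sig_figs(789.0, 3))     # 789

    In Mathematica, the same exponent arithmetic can feed the second argument of shn, which already handles the display side.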

    Read the article

  • Custom initrd init script: how to create /dev/initctl

    - by Posco Grubb
    I have a virtual machine (the VMM is Xen 3.3) equipped with two IDE HDDs (/dev/hda and /dev/hdb). The root file system is in /dev/hda1, where Scientific Linux 5.4 is installed. /dev/hdb contains an empty ext2 file system. I want to protect the root file system from writes by the VM by using aufs (AnotherUnionFS) to layer a writable file system on top of the root file system. The changes to / will be written to the file system located on /dev/hdb. (Furthermore, outside the VM, the file backing /dev/hda will also be set to read-only permissions, so the VMM should also prevent the VM from modifying at that level.) (The purpose of this setup: to be able to corrupt a virtual machine using software-implemented fault injection but preserve the file system image, in order to quickly reboot the VM to a fault-free state.) How do I get an initrd init script to do the necessary mounts to create the union file system? I've tried two approaches:

    1. I've tried modifying the nash script that mkinitrd creates, but I don't know what setuproot and switchroot do and how to make them use my aufs as the new root. Apparently, nobody else here knows either. (EDIT: I take that back.)
    2. I've tried building a LiveCD (using linux-live-6.3.0) and then modifying the Bash /linuxrc script from the generated initrd, and I got the mounts correct, but the final /sbin/init complains about /dev/initctl. Specifically, my /linuxrc mounts the aufs at /union. The last few lines of /linuxrc effectively do the following:

        cd /union
        mkdir -p mnt/live
        pivot_root . mnt/live
        exec sbin/chroot . sbin/init </dev/console >/dev/console 2>&1

    When init starts, it outputs something like "init: /dev/initctl: No such file or directory". What is supposed to create this FIFO? I found no such filename in the original linuxrc and liblinuxlive scripts. I tried creating it via "mkfifo /dev/initctl", but then init complained about a timeout opening or writing to the FIFO. Would appreciate any help or pointers. Thanks.

    Read the article

  • Stack-based keyboard delay using Logitech MX3100 keyboard

    - by Mark S. Rasmussen
    I've been using a Logitech Cordless Desktop MX3100 keyboard for quite a while. I've never really had any problems, except for the occasional typo. I noticed however that I tended to make the typo "Laod" instead of "Load" quite a bit more often than any other typo. As it started to get on my nerves, I decided to do some testing. What I found out was that when I write lowercase "load", I never make the typo. All uppercase, or just an uppercase L, and I make the typo quite often. My actual (very scientific) testing is probably best described by showing the output:

        moatmoatmoat MoatMoatMoat
        loatloatloat LaotLaotLaot
        loafloafloaf LaofLaofLaof
        hoathoathoat HoatHoatHoat
        hoadhoadhoad HoadHoadHoad
        lortlortlort LrotLrotLrot

    What I found out was that whenever shift was depressed, typing an uppercase "L" would induce a significant lag if the next character was an "o", compared to the lag of any other key:

        High "o" lag:                   LoLoLoLoLoLo
        No "a" lag:                     LaLaLaLaLaLa
        No lag for either "o" or "a":   lolololololo lalalalalala

    By realizing this I regained a slight bit of sanity, as I knew I wasn't coming down with a case of Parkinson's. I was actually typing correctly; the lag just interpreted it wrongly. Now, what really bugs me is that I can't fathom how this is occurring. What I'm actually typing, in physical order, is this: L - o - a - d, and yet the "a" is output before the "o", even though "o" was pressed before "a". So while the keyboard is processing the "Lo" combo, the "a" gets prioritized and is inserted before the "o" is done processing, resulting in "Laod" instead of "Load". And this only happens when typing "Lo", not when typing lowercase "lo". This problem could stem from the keyboard hardware, the receiver hardware or the keyboard software driver. No matter the fault location, however, I can't imagine how this could be implemented as anything but a FIFO queue. A general delay, sure, I could live with that, albeit I'd be irritated. But a lag affecting different keys differently, and even resulting in unpredictable outcomes: that just doesn't make any sense. I've solved the problem by just switching to a wired keyboard. I just can't shake it off me though; what kind of bug/error/scenario would result in a case like this? Edit: It's been suggested that I stop drinking Red Bull and stick to water instead. While that may actually help solve the issue, I'm really not looking for a solution as such. I'm more interested in an explanation of how this could happen, as I can't imagine any viable technical solution that could result in this behavior.

    Read the article

  • Mounting a drive in Ubuntu 9.10 (Karmic Koala)

    - by morpheous
    I have just installed Ubuntu on a machine that previously had XP installed on it. The machine has 2 HDDs (hard disk drives). I opted to install Ubuntu completely over XP. I am new to Linux, and I am still learning how to navigate the file structure. However, AFAICT, there is only one drive. I want to be able to store programs etc. on the first drive, and store data (program output etc.) on the second drive. It appears Ubuntu is not aware that I have 2 drives (on XP, these were drives C and D). How can I mount the second drive? (Ideally, I want to do this automatically on login, so that the drive is available to me whenever I log in, without manual intervention from me.) In XP, I could refer to files on a specific drive by prefixing with the drive letter (e.g. c:\foobar.cpp and d:\foobar.dat). I suspect the notation on Ubuntu is different. How may I specify specific files on different drives? Last but not the least (a bit unrelated to the previous questions), this relates to directory structure again. I am a developer (C++ for desktops and PHP for websites), and I want to install the following apps/libraries:

    i). Apache 2.2
    ii). PHP 5.2.11
    iii). MySQL (5.1)
    iv). SVN
    v). Netbeans
    vi). C++ development tools (gcc, gdb, emacs etc)
    vii). QT toolkit
    viii). Some miscellaneous scientific software (e.g. www.r-project.org, www.gnu.org/software/octave/)

    I would be grateful if someone could recommend a directory layout for these applications. Regarding development, I would also be grateful if someone could point out where to store my project and source files, i.e.:

    (i) *.cpp, *.hpp, *.mak files for C++ projects
    (ii) individual websites

    On my XP machine the layout for C++ development was like this:

        c:\dev\devtools (common libs and headers etc)
        c:\dev\workarea (root folder for projects)
        c:\dev\workarea\c++ (c++ projects)
        c:\dev\workarea\websites (web projects)

    I would like to create a similar folder structure on the Linux machine, but it's not clear whether to place these folders under /, /usr, /home or somewhere else (there seems to be a baffling number of choices, so I want to get it "right" first time, i.e. have a directory structure that most developers use, so it is easier when communicating with other Ubuntu/Linux developers).

    Read the article

  • Port forwarding not working properly

    - by sudo work
    I'm trying to host a small web server from my home network; however, I have not been able to successfully forward ports to the local server. My current network topology looks like this:

        Cable Modem/Router - Secondary Wireless Router - Many computers (including server)

    The modem/router I'm using is a Cisco (Scientific Atlanta) DPC2100, provided by my ISP. The wireless router that I'm using as the central hub of my home network is a Linksys E3000. The computer being used as a server is running Ubuntu 10.04 Server Edition. The main issue is that I can't access the server remotely, using my WAN IP address. I have port forwarded my wireless router; however, I believe that I need to somehow set my modem to bridge mode. As far as I can tell, though, this isn't possible. Here are the various IP address settings:

        DPC2100
          WAN: 69.xxx.xxx.xxx
          Internal IP: 192.168.100.1
          Internal Network: 192.168.7.0

        E3000
          IP Address: 192.168.7.2
          Gateway: 192.168.7.1
          Internal IP: 192.168.1.1
          Internal Network: 192.168.1.0

        Server
          IP Address: 192.168.1.123
          Gateway: 192.168.1.1

    Now I can do an nmap at various nodes, and here are the results (from the server):

        nmap localhost:       22,25,53,80,110,139,143,445,631,993,995,3306,5432,8080 open
        nmap 192.168.7.2:     22,25,80 (filtered),110,139,445 open (ports I have forwarded in the E3000)*
        nmap 69.xxx.xxx.xxx:  1720 open

    *For some reason, I can SSH into the server at 192.168.7.2, but not view the website. Here are also some other settings:

    /etc/hosts

        127.0.0.1 localhost
        127.0.1.1 servername
        ::1     localhost ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters

    /etc/apache2/sites-available/default snippet

        <VirtualHost *:80>
            DocumentRoot /srv/www/
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
            ...
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
            ...
            </Directory>
            ErrorLog /var/log/apache2/error.log
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
            ...
            </Directory>
        </VirtualHost>

    Let me know if you need any other information; some stuff probably slipped my mind.

    Read the article

  • Two DHCP Servers, Block Clients for one of them?

    - by Rilindo
    I am building out a kickstart network that resides on a different VLAN and uses its own DHCP server. For some reason, my kickstart clients keep getting assigned IPs from my primary DHCP server. The way I have it set up is that I have a primary DHCP server on this router here: 192.168.15.1. Connected to that DHCP server is a switch with the IP of 192.168.15.2. My kickstart (Scientific Linux) server is connected to that switch on two ports:

    Port 2 - where the kickstart server communicates with the rest of the production network via eth0. The IP assigned to the server on that interface is 192.168.15.100 (on eth0). The details are:

        Interface: eth0
        IP: 192.168.15.100
        Netmask: 255.255.255.0
        Gateway: 192.168.15.1

    Port 7 - has its own VLAN ID (along with port 8). The kickstart server is connected to that port with the IP of 172.16.15.100 (on eth1). Again, the details are:

        Interface: eth1
        IP: 172.16.15.100
        Netmask: 255.255.255.0
        Gateway: none

    The kickstart server runs its own DHCP server and assigns addresses over eth1. Most of the kickstarts are built over the kickstart VLAN through port 8. To prevent the kickstart DHCP server from assigning addresses over the production network, I have the route set up like so:

        route add -host 255.255.255.255 dev eth1

    At this point, the clients keep getting assigned IPs from the 192.168.15.1 DHCP server. I need to figure out a way to block client requests from reaching that DHCP server. It should be noted that I also build KVM hosts on the kickstart server, so I need those KVMs to have the ability to get DHCP requests from the 192.168.15.1 DHCP server via the bridge network once I've resolved this particular problem. (Currently, they communicate via NAT.) So what could be done to resolve this? Through iptables or some sort of routing I need to put in? I tried to limit requests via iptables on that interface, allowing DHCP requests for the 172.16.15.x network:

        -A INPUT -i eth1 -s 172.16.15.0/24 -p udp -m udp --dport 69 -j ACCEPT
        -A INPUT -i eth1 -s 172.16.15.0/24 -p tcp -m tcp --dport 69 -j ACCEPT
        -A INPUT -i eth1 -s 172.16.15.0/24 -p udp -m udp --dport 68 -j ACCEPT
        -A INPUT -i eth1 -s 172.16.15.0/24 -p tcp -m tcp --dport 68 -j ACCEPT
        -A INPUT -i eth1 -s 172.16.15.0/24 -p udp -m udp --dport 67 -j ACCEPT
        -A INPUT -i eth1 -s 172.16.15.0/24 -p tcp -m tcp --dport 67 -j ACCEPT

    and rejecting assignments on eth1 from the 192.168.15.x network:

        -A FORWARD -o eth1 -s 192.168.15.0/24 -p udp -m udp --dport 69 -j REJECT
        -A FORWARD -o eth1 -s 192.168.15.0/24 -p tcp -m tcp --dport 69 -j REJECT
        -A FORWARD -o eth1 -s 192.168.15.0/24 -p udp -m udp --dport 68 -j REJECT
        -A FORWARD -o eth1 -s 192.168.15.0/24 -p tcp -m tcp --dport 68 -j REJECT
        -A FORWARD -o eth1 -s 192.168.15.0/24 -p udp -m udp --dport 67 -j REJECT
        -A FORWARD -o eth1 -s 192.168.15.0/24 -p tcp -m tcp --dport 67 -j REJECT

    Nope. :(

    Read the article

  • CodePlex Daily Summary for Sunday, March 07, 2010

    CodePlex Daily Summary for Sunday, March 07, 2010

    New Projects

    - Algorithminator: Universal .NET algorithm visualizer, which helps you to illustrate any algorithm, written in any .NET language. Still in development.
    - ALToolkit: Contains a set of handy .NET components/classes. Currently it contains: * A Numeric Text Box (an Extended NumericUpDown) * A Splash Screen base fo...
    - Automaton Home: Automaton is a home automation software built with a n-Tier, MVVM pattern utilizing WCF, EF, WPF, Silverlight and XBAP.
    - Developer Controls: Developer Controls contains various controls to help build applications that can script/write code.
    - Dynamic Reference Manager: Dynamic Reference Manager is a set (more like a small group) of classes and attributes written in C# that allows any .NET program to reference othe...
    - indiologic: Utilities of an Indio
    - Neural Cryptography in F#: This project is my magistracy resulting work. It is intended to be an example of using neural networks in cryptography. Hashing functions are chose...
    - Particle Filter Visualization: Particle Filter Visualization Program for the Intel Science and Engineering Fair
    - Pólya: Efficient, immutable, polymorphic collections. .Net lacks them, we provide them*. * By we, we mean I; and by efficient, I mean hopefully so.
    - project euler solutions from mhinze: mhinze project euler solutions
    - Silverlight 4 and WCF multi layer: Silverlight 4 and WCF multi layers
    - sqwarea: Project for a browser-based, minimalistic, massively multiplayer strategy game. Part of the "Génie logiciel et Cloud Computing" course of the ENS (...
    - SuperSocket: SuperSocket, a socket application framework, can build FTP/SMTP/POP servers easily
    - Toast (for ASP.NET MVC): Dynamic, developer & designer friendly content injection, compression and optimization for ASP.NET MVC

    New Releases

    - ALToolkit: ALToolkit 1.0: Binary release of the libraries containing: NumericTextBox, SplashScreen. Based on the VB.NET code, but that doesn't really matter.
    - Blacklist of Providers: 1.0-Milestone 1: Blacklist of Providers, Milestone 1. In this development release implemented - Main interface (Work Item #5453) - Database (Work Item #5523)
    - C# Linear Hash Table: Linear Hash Table b2: Now includes a default constructor, and will throw an exception if capacity is not set to a power of 2 or loadToMaintain is below 1.
    - Composure: CassiniDev-Trunk-40745-VS2010.rc1.NET4: A simple port of the CassiniDev portable web server project for Visual Studio 2010 RC1 built against .NET 4.0. The WCF tests currently fail unless...
    - Developer Controls: DevControls: These are the version 1.0 releases of these controls. Download them individually or all together (in a .zip file). More releases coming soon!
    - Dynamic Reference Manager: DRM Alpha1: This is the first release. I'm calling it Alpha because I intend implementing other functions, but I do not intend changing the way current functio...
    - ESB Toolkit Extensions: Tellago SOA ESB Extenstions v0.3: Windows Installer file that installs the Library on a BizTalk ESB 2.0 system. This install automatically configures the esb.config to use the new compo...
    - GKO Libraries: GKO Libraries 0.1 Alpha: 0.1 Alpha
    - Home Access Plus+: v3.0.3.0: Version 3.0.3.0 Release Change Log: Added Announcement Box; Removed script files that aren't needed; Fixed & issue in directory path; Stylesheet...
    - Icarus Scene Engine: Icarus Scene Engine 1.10.306.840: Icarus Professional, Icarus Player, the supporting software for Icarus Scene Engine, with some included samples, and the start of a tutorial (with ...
    - mavjuz WndLpt: wndlpt-0.2.5: New: Response to 5 LPT inputs "test i 1"; New: Reaction to 12 LPT outputs "test q 8"; New: Reaction to all LPT pins "test pin 15"; New: Syntax: ...
    - Neural Cryptography in F#: Neural Cryptography 0.0.1: The most simple version of this project. It has a neural network that works just like logical AND and a possibility to recreate neural network from...
    - Password Provider: 1.0.3: This release fixes a bug which caused the program to crash when double clicking on a generic item.
    - RoTwee: RoTwee 6.2.0.0: New feature is as next. 16649 Add hashtag for tweet of tune. Now you can tweet your playing tune with hashtag.
    - Visual Studio DSite: Picture Viewer (Visual C++ 2008): This example source code allows you to view any picture you want at the click of a button. All you got to do is click the button and browse via th...
    - WatchersNET CKEditor™ Provider for DotNetNuke: CKEditor Provider 1.8.00: What's New: File Browser: Folders & Files View reworked; File Browser: Folders are displayed as TreeVi...
    - WSDLGenerator: WSDLGenerator 0.0.0.4: - replaced CommonLibrary.dll by CommandLineParser.dll - added better support for custom complex types

    Most Popular Projects

    - MetaSharp
    - Silverlight Toolkit
    - ASP.NET Ajax Library
    - All-In-One Code Framework
    - Windows 7 USB/DVD Download Tool
    - ニコ生アラート
    - Windows Double Explorer
    - Virtual Router - Wifi Hot Spot for Windows 7 / 2008 R2
    - Caliburn: An Application Framework for WPF and Silverlight
    - ArkSwitch

    Most Active Projects

    - Umbraco CMS
    - Rawr
    - SDS: Scientific DataSet library and tools
    - BlogEngine.NET
    - jQuery Library for SharePoint Web Services
    - patterns & practices – Enterprise Library
    - Ionics Isapi Rewrite Filter
    - Farseer Physics Engine
    - Fasterflect - A Fast and Simple Reflection API
    - Fluent Assertions

    Read the article

  • WP7 “Phantom Data” Source Possibly Revealed?

    - by Bil Simser
    Recently there's been rumours floating around regarding "phantom" Windows Phone 7 data being magically sent and received on the latest WP7 phones. The news has mostly been floating around Twitter, so I didn't pay it much attention. The BBC Technology News picked it up, so I thought I would look into it myself, seeing that we have WP7 phones and maybe there was some truth to all this (and, more importantly, what was the cause). Full disclosure: I don't have a lot of data points around this. This is from looking at a few phone logs, changing the configuration and looking back again after the change. I haven't done a clean baseline test, nor have I done testing with hundreds of phones. I leave the experience up to the reader to decide. So I went spelunking into the phone logs to see what was up. Most providers will show you data usage, at least on a daily basis. I lucked out with the provider and plan in that they provide hourly breakdowns. Here's a snapshot from my usage throughout one night.

        Timestamp     Data Usage
        12:38:30 AM   2098 Kilobytes
        1:30:30 AM    2 Kilobytes
        2:38:30 AM    7118 Kilobytes
        3:38:30 AM    6622 Kilobytes
        4:38:30 AM    76 Kilobytes
        5:38:30 AM    29 Kilobytes
        6:38:30 AM    19 Kilobytes
        7:38:30 AM    20 Kilobytes

    So a few observations from this data:

    - Data seems to be collected on a regular basis. Looking at some other people's phone logs, the times vary but it's always hourly.
    - There's not a tremendous amount of data here (about 16 megabytes) but it seems like a lot for 7 hours.
    - The phone was connected to my home WiFi during this period.
    - Nothing was running and the phone was in a locked state.

    Like I said, not a lot of data, but it adds up. 16MB for 7 hours = about 50MB in a 24 hour period. That's just plain old data being collected (somewhere, somehow) and not actual usage (Marketplace, Email, Browsing, etc.). Besides, when connected to a WiFi network you shouldn't be charged data usage by your phone company (in theory, YMMV). After reviewing the logs I formed a theory that the only thing that could possibly be sending data is the Feedback feature. With no other apps running under lock, what else could it be? In Windows Phone 7, under your Settings, the last option is Feedback. This sends feedback to Microsoft to "help improve Windows Phone". On this page you have three options:

    - Send feedback and use my cellular data connection
    - Send feedback and (presumably) use my WiFi connection
    - Don't send feedback

    Knowing what I know about Microsoft, they do use the feedback data. For example, some of the placement and inclusion of features in Office 2007 was based on the feedback data that Office sends (assuming you had opted in). However, in the Privacy Statement (it's long, but a good read at least once in your life), the phone manual, and every other source I could look at, there is no information about how much data it's planning to send, just that it's sending some data and that "some data charges with your carrier may apply". Looking back at the logs, I have to wonder. 6MB at 3:30 and *then* 7MB the next hour. That's a lot of information. And it adds up. 50MB in a 24 hour period x 30 days puts most people over a normal 1GB plan. And frankly, why am I paying for a data plan only to have 80% of it chewed up by Microsoft, with no real benefit to me? If they included porn in the 50MB daily transfer I'd be okay with this, but I don't see any new movies on my phone. So I turned it off. Set Feedback to disabled and wait. I waited. And waited. And generally didn't use the phone if I could.
    The next day I went back to look at the data usage logs from the time period after turning the feedback mechanism off. Here are the results.

        Timestamp     Data Usage
        1:19:48 PM    0 Kilobytes
        2:19:48 PM    0 Kilobytes
        3:19:48 PM    0 Kilobytes
        4:19:48 PM    678 Kilobytes (took a phone call)
        5:19:48 PM    82 Kilobytes
        6:19:48 PM    88 Kilobytes
        7:20:30 PM    86 Kilobytes (guess they changed their reporting time)
        8:20:30 PM    86 Kilobytes
        9:20:30 PM    66 Kilobytes
        10:20:30 PM   67 Kilobytes
        11:20:30 PM   49 Kilobytes
        12:20:30 AM   32 Kilobytes
        1:20:30 AM    38 Kilobytes
        2:20:31 AM    18 Kilobytes
        3:20:31 AM    27 Kilobytes
        4:20:31 AM    86 Kilobytes
        5:20:31 AM    53 Kilobytes
        6:20:31 AM    22 Kilobytes
        7:22:15 AM    30 Kilobytes (another reporting time change)
        8:22:15 AM    29 Kilobytes
        9:22:15 AM    74 Kilobytes
        10:22:15 AM   154 Kilobytes (phone call)
        11:22:15 AM   12 Kilobytes
        12:13:27 PM   49 Kilobytes
        1:13:27 PM    197 Kilobytes (phone call)

    Quite a *drastic* change from when Feedback was turned on. I mean, for a 24 hour period (sans 3 phone calls) I consumed about 1MB. Still quite a bit of transfer going on, but at least it amounts to 30MB per month, not 30MB per day! Like I said, this observation is neither scientific nor conclusive. You decide what to do, but frankly until Microsoft makes this data transfer exempt from your data plan (like that will happen) I would just turn Feedback off. YMMV.

    Read the article

  • Mixed Emotions: Humans React to Natural Language Computer

    - by Applications User Experience
    There was a big event in Silicon Valley on Tuesday, November 15. Watson, the natural language computer developed at IBM Watson Research Center in Yorktown Heights, New York, and its inventor and principal research investigator, David Ferrucci, were guests at the Computer History Museum in Mountain View, California for another round of the television game Jeopardy. You may have read about or watched on YouTube how Watson beat Ken Jennings and Brad Rutter, two top Jeopardy competitors, last February. This time, Watson swept the floor with two Silicon Valley high-achievers, one a venture capitalist with a background in math, computer engineering, and physics, and the other a technology and finance writer well-versed in all aspects of culture and humanities. Watson is the product of the DeepQA research project, which attempts to create an artificially intelligent computing system through advances in natural language processing (NLP), among other technologies. NLP is a computing strategy that seeks to provide answers by processing large amounts of unstructured data contained in multiple large domains of human knowledge. There are several ways to perform NLP, but one way to start is by recognizing key words, then processing contextual cues associated with the keyword concepts so that you get many more “smart” (that is, human-like) deductions, rather than a series of “dumb” matches. Jeopardy questions often require more than keyword matching to get the correct answer; typically several pieces of information must be put together, often from vastly different categories, to come up with a satisfactory word-string solution that can be rephrased as a question. Smarter than your average search engine, but is it as smart as a human? Watson was especially fast at descrambling mixed-up state capital names, and at recalling and pairing movie titles where one started and the other ended in the same word (e.g., Billion Dollar Baby Boom, where both titles used the word Baby). David said they had basically removed the variable of how fast Watson hit the buzzer compared to human contestants, but frustration frequently appeared on the faces of the contestants beaten to the punch by Watson. David explained that top Jeopardy winners like Jennings achieved their success with a similar strategy, timing their buzz to the end of the reading of the clue, and “running the board”, being first to respond on about 60% of the clues. Similar results for Watson. It made sense that Watson would be good at the technical and scientific stuff, so I figured the venture capitalist was toast. But I thought for sure Watson would lose to the writer in categories such as pop culture, wines and foods, and other humanities. Surprisingly, it held its own. I was amazed it could recognize a word definition of a syllogism in the category of philosophy. So what was the audience reaction to all of this? We started out expecting our formidable human contestants to easily run some of their categories; however, they started off on the wrong foot with the state capitals, which Watson could unscramble so efficiently. By the end of the first round, contestants and the audience were feeling a little bit, well... deflated. Watson was winning by about $13,000, and the humans had gone into negative dollars. The IBM host said he was going to “slow Watson down a bit,” and the humans came back with respectable scores in Double Jeopardy.
    This was partially thanks to a very sympathetic audience (and host, also a human) providing “group-think” on many questions, especially baseball's most valuable players, which, by the way, couldn't have been hard, because even I knew them. Yes, that's right, the humans cheated. Since Watson could speak but not hear us (it didn't have speech recognition capability), it was probably unaware of this. In Final Jeopardy, the single question had to do with law. I was sure Watson would blow this one, but all contestants were able to answer correctly about a copyright law. In a career devoted to making computers more helpful to people, I think I may have seen how a computer can do too much. I'm not sure I'd want to work side-by-side with a Watson doing my job. Certainly listening and empathy are important traits we humans still have over Watson. While there was great enthusiasm in the packed room of computer scientists and their friends for this standing-room-only show, I think it made several of us uneasy (especially the poor human contestants whose egos were soundly bashed in the first round). This computer system, by the way, only took 4 years to program. David Ferrucci mentioned several practical uses for Watson, including medical diagnoses and legal strategies. Are you “the expert” in your job? Imagine NLP computing on an Oracle database. This may be the user interface of the future, enabling users to better process big data. How do you think you'd like it? Postscript: There were three little boys sitting in front of me in the very first row. They looked, how shall I say it... unimpressed!

    Read the article

  • Breaking up the Workday– Overcoming the Workaholic Syndrome

    - by dwahlin
    Hi, my name's Dan Wahlin and I'm a workaholic – I admit it. It's good from the standpoint that I get a lot done, but it also has a lot of cons associated with it that I'm not proud of. I literally can't watch TV without feeling like I should be doing something more productive (although I have no problem going to see movies at a theater or watching sporting events – that's my escape, I guess). On vacation it's sometimes difficult the first few days to just "let go" of work and enjoy the time with my family. I always feel like I should be checking email and following up with different business projects. Fortunately, my wife knows me really well after 17 years of marriage and "gently" restricts my usage of laptops and other gadgets while we're out. She also reminds me that constantly burying my face in gadgets just isn't cool and shows a distinct lack of self control. On a given day I typically put in from 12 (at a minimum) up to 16-18 hours working on projects. My company does .NET consulting (ASP.NET/jQuery, SharePoint and Silverlight), but we also do a lot in the training space, so there's always a client project, some new courseware or some other deliverable that has to be worked on. My normal process for handling that is to just work my butt off and see how much I can get done. That process has worked well for a long time, but when you start realizing that your happiness comes from how much work you accomplished that day, then you have a problem. That's especially true if you have kids (which I do… two awesome boys). It's almost as if working more hours feels like I'm more successful or something, which is of course ridiculous. It may actually mean that I'm too distracted or disorganized. Lately I've realized that while I'm still productive and always meet my deadlines, I'm really burnt out by the afternoon and have lost some of the excitement I used to have. Part of that's normal, I think, given that I've been doing this for close to 15 years now, but in thinking through it more I realized that I just need to get away from the desk and take a break. By far, the happiest time of my life was my childhood. Part of that was due to having awesome parents, having far less responsibility (a big factor, I suspect), being able to hang out with friends at school, playing sports, games, etc., but I think a big part of the overall happiness came from being outside a lot. I lived on my bike as a little kid, and as I grew up I shared time between riding an ATV all over the place, shooting hoops on the basketball court, playing golf and working on a golf course (all outside work, of course). Being a software developer and trainer, I generally spend 95% or more of my day indoors and only see the sun when driving from place to place or by looking out the window (that's sad because I live in a suburb of Phoenix, AZ, where it's nearly always sunny). I haven't looked into any scientific studies on the matter, but I'd be willing to bet there's a direct correlation between overall productivity/happiness and being outside some throughout the day (sunny or not). But I wasn't sure what to do about it, since I do have a lot of deadlines I need to meet, after all. While talking with my wife last night I mentioned how I feel like I'm in a rut and want to get the "fun" back that I used to have. She immediately said that I need to start making time for breaks (a real quick fact – she's a lot smarter than me and nearly always right). Of course my first thought was that I'd be less productive taking breaks.
If I spend 2 hours just relaxing then I’m losing 2 hours of work. But, I thought about it more and realized that I’m probably less productive when I work 10+ hours and only take less than 30 minutes for a lunch break to relax a little. I bet my brain is screaming, “Please let me relax a little so I can figure out these problems you’re trying to resolve!”. So, starting today I’m going to try to break the workaholic habit and spend time outside of the office. That could mean sitting around outside, working out, golfing, or whatever. I’ve decided that no gadgets are allowed during that time and that I shouldn’t work for more than 4 hours straight without taking a break. I have no idea how my little “break the workaholic syndrome” experiment will go or how long it will last, but I’d be very interested in hearing from others on how they keep fresh and focused without working yourself to death. If you have any specific ideas, techniques or practices you follow please share them. There’s a lot more to life than work and some of us (and I’m thinking of myself specifically) need to take a long, hard look at what kind of balance we currently have. I’d hate to look back at my life when I’m 80 years old and say, “The only thing I did was work – I missed out on life!”.

    Read the article

  • My Feelings About Microsoft Surface

    - by Valter Minute
    Advice: read the title carefully, I'm talking about "feelings" and not about advanced technical points proved in a scientific and objective way. I still haven't had a chance to play with an MS Surface tablet (I would love to, of course), so my ideas just come from reading different articles on the net and MS official statements. Remember also that the MVP motto begins with "Independent" ("Independent Experts. Real World Answers.") and this is just my humble opinion about a product and a technology. I know that, being an MS MVP, I can be called an "MS fanboy"; I don't care. I hope that people can appreciate my opinion, even if it doesn't match theirs. The "Surface" brand can be confusing for techies who knew the "original" Surface concept, but I think it will be a fresh new brand name for most of the people out there. But marketing departments are here to confuse people... so I can understand this "recycling" of an existing name. So Microsoft is entering the hardware arena... for me this is good news. Microsoft developed some nice hardware in the past: the Xbox, the Zune (even if its commercial success was quite limited) and, last but not least, the two Arc mice (old and new model) that I use and appreciate. In the past Microsoft worked with OEMs, and that model led to good and bad things. The good thing (for Microsoft, at least) is market domination by Windows-based PCs, which only in recent years has been reduced by the return of the Mac and by tablets. Google is also moving into the hardware business with its acquisition of Motorola, and Apple leveraged its control of both the hardware and software sides to develop innovative products. Microsoft can scare OEMs and make them fly away from Windows (but to where?) or just lead the pack, showing how devices should be designed to compete in the market, and bring back some of the innovation that disappeared from recent PC products (look at the shelves of your favorite electronics store and try to distinguish a laptop among the huge mass of anonymous PCs on display... only Macs shine out there...). Having to compete with MS "official" hardware will force OEMs to develop better products and bring back some real competition in a market that was ruled only by prices (the lower the better, even when that means low quality) and no innovative features at all (when was the last time that a new PC surprised you?). Moving into a new market is a big and risky move, but with Windows 8 Microsoft is making a crucial play for its future, trying to get back into the innovation race against Apple and Google. MS can't afford to fail this time. I saw the new devices (the WinRT and Pro), and the specifications are scarce, misleading and confusing. The first impression is that the device looks like an iPad with a nice keyboard cover... Using "HD" and "full HD" to define display resolution instead of using the real figures, and reviving the "ClearType" brand (now dead on Win8, as reported here, and missed by people who hate to read text on displays, like myself) without providing clear figures (couldn't you count those damned pixels?), seems to imply that MS was caught by surprise by Apple's recent "retina" displays that brought very high definition screens to tablets. Also there are no specifications about the processors used (even if some sources report NVidia Tegra for the ARM tablet and i5 for the x86 one) or expected battery life (a critical point for tablets, and the point that killed Windows 7 x86-based tablets).
    Also there is nothing about the price, and this will be another critical point, because other platforms out there already provide lots of applications and have a good user base; if MS wants to enter this market, tablet pricing must be competitive. There are some expansion ports (SD and USB), so no fixed storage model (even if the specs talk about 32-64GB for RT and 128-256GB for Pro). I like this, and I don't like the Apple model, where flash memory (which is dirt cheap used in thumb drives or SD cards) is as expensive as gold (or cocaine, to have a more accurate per-gram measurement) when mounted inside a tablet/phone. For big files you'll be able to use external media, and an SD card could be used to store files that don't require super-fast SSD-like access times, I hope. To be honest, I really don't like the marketplace model and the limitations of the Windows RT APIs (no local database? from a company that based a good share of its success on VB6+Access!) or the lack of desktop support on ARM (even if the support is there and has been used to port Office). It's a step toward the consumer market (where competitors are making big money), but it may impact enterprise (and embedded) users who may not appreciate Windows 8's new UI or the limitations of the new app model (if you aren't connected, you are dead). Not having compatibility with the desktop will require brand new applications, and honestly it makes all the CPU cycles spent to convert .NET IL into real machine code in the past seem like a huge waste of time... as soon as a new processor architecture is supported by Windows, you still have to rewrite part of your application (and MS is pushing HTML5+JS and native code more than .NET, in my perception). On the other side, I believe that the development experience provided by Visual Studio is still miles (or kilometres) ahead of the competition, and even the all-uppercase menu of VS2012 hasn't changed this situation. The new Metro UI got mixed reviews. On my side, I should say that it is very pleasant to use on a touch screen, and I like the minimalist design (even if sometimes it's too minimal and hides stuff that, in my opinion, should be visible), but I should also say that using it with mouse and keyboard is like trying to pick your nose with boxing gloves... Metro is also very interesting for embedded devices, where touch screen usage is quite common and where having an application take up the whole screen is the norm. For devices like kiosks, vending machines, etc., this kind of UI can be a great selling point. I don't need a new tablet (to be honest, I'm pretty happy with my wife's iPad and with my PC), but I may change my opinion after having a chance to play a little bit with those new devices and understand what's hidden under all these mysterious and generic announcements and specifications!

    Read the article

  • What is in your Mathematica tool bag?

    - by Timo
    We all know that Mathematica is great, but it also often lacks critical functionality. What kind of external packages / tools / resources do you use with Mathematica? I'll edit (and invite anyone else to do so too) this main post to include resources which are focused on general applicability in scientific research and which as many people as possible will find useful. Feel free to contribute anything, even small code snippets (as I did below for a timing routine). Also, undocumented and useful features in Mathematica 7 and beyond that you found yourself, or dug up from some paper/site, are most welcome. Please include a short description or comment on why something is great or what utility it provides. If you link to books on Amazon with affiliate links please mention it, e.g., by putting your name after the link.

    Packages:

    - LevelScheme is a package that greatly expands Mathematica's capability to produce good looking plots. I use it if not for anything else then for the much, much improved control over frame/axes ticks.
    - David Park's Presentation Package ($50 - no charge for updates)

    Tools:

    - MASH is Daniel Reeves's excellent perl script essentially providing scripting support for Mathematica 7. (This is finally built in as of Mathematica 8 with the -script option.)

    Resources:

    - Wolfram's own repository MathSource has a lot of useful if narrow notebooks for various applications. Also check out the other sections such as Current Documentation, Courseware for lectures, and Demos for, well, demos.

    Books:

    - Mathematica programming: an advanced introduction by Leonid Shifrin (web, pdf) is a must read if you want to do anything more than For loops in Mathematica.
    - Quantum Methods with Mathematica by James F. Feagin (amazon)
    - The Mathematica Book by Stephen Wolfram (amazon) (web)
    - Schaum's Outline (amazon)
    - Mathematica in Action by Stan Wagon (amazon) - 600 pages of neat examples and goes up to Mathematica version 7. Visualization techniques are especially good; you can see some of them on the author's Demonstrations Page.
    - Mathematica Programming Fundamentals by Richard Gaylord (pdf) - A good concise introduction to most of what you need to know about Mathematica programming.

    Undocumented (or scarcely documented) Features:

    - How to customize Mathematica keyboard shortcuts. See this question.
    - How to inspect patterns and functions used by Mathematica's own functions. See this answer.
    - How to achieve consistent size for GraphPlots in Mathematica? See this question.

    Read the article

  • How to optimize Core Data query for full text search

    - by dk
    Can I optimize a Core Data query when searching for matching words in a text? (This question also pertains to the wisdom of custom SQL versus Core Data on an iPhone.) I'm working on a new (iPhone) app that is a handheld reference tool for a scientific database. The main interface is a standard searchable table view, and I want as-you-type response as the user types new words. Word matches must be prefixes of words in the text. The text is composed of 100,000s of words. In my prototype I coded SQL directly. I created a separate "words" table containing every word in the text fields of the main entity. I indexed words and performed searches along the lines of

        SELECT id, * FROM textTable
        JOIN (SELECT DISTINCT textTableId FROM words
              WHERE word BETWEEN 'foo' AND 'fooz') ON id=textTableId
        LIMIT 50

    This runs very fast. Using an IN would probably work just as well, i.e.

        SELECT * FROM textTable
        WHERE id IN (SELECT textTableId FROM words
                     WHERE word BETWEEN 'foo' AND 'fooz')
        LIMIT 50

    The LIMIT is crucial and allows me to display results quickly. I notify the user that there are too many to display if the limit is reached. This is kludgy. I've spent the last several days pondering the advantages of moving to Core Data, but I worry about the lack of control in the schema, indexing, and querying for an important query. Theoretically an NSPredicate of textField MATCHES '.*\bfoo.*' would just work, but I'm sure it will be slow. This sort of text search seems so common that I wonder what the usual attack is. Would you create a words entity as I did above and use a predicate of "word BEGINSWITH 'foo'"? Will that work as fast as my prototype? Will Core Data automatically create the right indexes? I can't find any explicit means of advising the persistent store about indexes. I see some nice advantages of Core Data in my iPhone app. The faulting and other memory considerations allow for efficient database retrievals for tableview queries without setting arbitrary limits. The object graph management allows me to easily traverse entities without writing lots of SQL. Migration features will be nice in the future. On the other hand, in a limited resource environment (iPhone) I worry that an automatically generated database will be bloated with metadata, unnecessary inverse relationships, inefficient attribute datatypes, etc. Should I dive in or proceed with caution?
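    For readers who want to poke at the prototype's approach outside Xcode, here is a tiny self-contained sketch of the words-table prefix search in Python/sqlite3 (illustrative only; table names follow the question, the sample data is made up):

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE textTable (id INTEGER PRIMARY KEY, body TEXT);
            CREATE TABLE words (word TEXT, textTableId INTEGER REFERENCES textTable(id));
            CREATE INDEX idx_words_word ON words(word);
        """)
        db.execute("INSERT INTO textTable VALUES (1, 'football statistics')")
        db.executemany("INSERT INTO words VALUES (?, ?)",
                       [("football", 1), ("statistics", 1)])

        def search(prefix, limit=50):
            # BETWEEN prefix AND prefix || 'z' approximates "starts with prefix",
            # mirroring the question; LIKE 'foo%' on an indexed column also works.
            return db.execute(
                """SELECT t.id, t.body FROM textTable t
                   JOIN (SELECT DISTINCT textTableId FROM words
                         WHERE word BETWEEN ? AND ?) m ON t.id = m.textTableId
                   LIMIT ?""",
                (prefix, prefix + "z", limit)).fetchall()

        print(search("foo"))   # [(1, 'football statistics')]

    The index on words(word) is what keeps the as-you-type latency flat; whatever schema Core Data generates, that index (and the LIMIT) is the part worth verifying in the resulting SQLite file.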

    Read the article

  • Opening HTML Table in Excel - numbers are getting changed

    - by nickfranceschina
    so I have this HTML table with a bunch of big numbers in it that I want to open in Excel 2007... you can follow along at home:

        <table>
          <tr><td>this is a big number</td></tr>
          <tr><td>1111111</td></tr>
          <tr><td>2335322864</td></tr>
          <tr><td>23353228641</td></tr>
          <tr><td>233532286418</td></tr>
          <tr><td>2335322864187</td></tr>
          <tr><td>23353228641877</td></tr>
          <tr><td>233532286418777</td></tr>
          <tr><td>2335322864187774</td></tr>
          <tr><td>23353228641877745</td></tr>
          <tr><td>233532286418777456</td></tr>
          <tr><td>2335322864187774562</td></tr>
          <tr><td>23353228641877745623</td></tr>
          <tr><td>233532286418777456238</td></tr>
        </table>

    When I open this file in Excel it starts converting those numbers to scientific notation when they get over 10 digits in length... and in doing so, it starts changing the actual numbers, replacing the least significant digits with zeros. How can I tell Excel not to do this?

    Read the article

  • Problem getting correct parameters for C# P/Invoke call to C++ dll

    - by Jim Jones
    Trying to interop some functionality from the Outside In API from Oracle. I have the following function:

        SCCERR EXOpenExport(VTHDOC hDoc, VTDWORD dwOutputId, VTDWORD dwSpecType,
                            VTLPVOID pSpec, VTDWORD dwFlags, VTSYSPARAM dwReserved,
                            VTLPVOID pCallbackFunc, VTSYSPARAM dwCallbackData,
                            VTLPHEXPORT phExport);

    From the header files I reduced the parameters to:

        typedef VTSYSPARAM VTHDOC, VTLPHDOC *
        typedef DWORD_PTR VTSYSPARAM
        typedef unsigned long DWORD_PTR
        typedef unsigned long VTDWORD
        typedef VTVOID* VTLPVOID
        #define VTVOID void
        typedef VTHDOC VTHEXPORT, *VTLPEXPORT

    These are for 32-bit Windows. Going through the header files, the example programs, and the documentation I found:

    1. That pSpec could be a pointer to a buffer or NULL, so I set it to IntPtr.Zero (documentation).
    2. That dwFlags and dwReserved, according to the documentation, "Must be set by the developer to 0".
    3. That pCallbackFunc can be set to NULL if I don't want to handle callbacks.
    4. That the last two are based on structs, for which I wrote C# wrappers using [StructLayout(LayoutKind.Sequential)]. I then instantiated an instance and generated the parameters by first creating an IntPtr with Marshal.AllocHGlobal(Marshal.SizeOf(instance)), then getting the address value, which is passed as a uint for dwCallbackData and an IntPtr for phExport.

    The final parameter list is as follows:

    1. phDoc as an IntPtr, which was loaded with an address by the DAOpenDocument function called before.
    2. dwOutputId as uint, set to 1535, which represents FI_JPEGFIF.
    3. dwSpecType as int, set to 2, which represents IOTYPE_ANSIPATH.
    4. pSpec as an IntPtr.Zero, where the output will be written.
    5. dwFlags as uint, set to 0 as directed.
    6. dwReserved as uint, set to 0 as directed.
    7. pCallbackFunc as IntPtr, set to NULL as I will handle results.
    8. dwCallbackData as uint, the address of a buffer for a struct.
    9. phExport as IntPtr, to another struct buffer.

    I still get an undefined error from the API, meaning that the call returns a 961, which is not defined in any of the header files. In the past I have gotten this when my choice of parameter types was incorrect. I started out using Interop Assistant, which was helpful in learning how many of the parameter types get translated. It is, however, limited by how well I am able to glean the correct native type from the header files. For example, the hDoc parameter used in the preceding function was defined as a non-filesystem handle, so I attempted to use Marshal to create a handle, then used an IntPtr, and it turned out to be an int (actually it was &phDoc used here). So is there a more scientific way of doing this, other than trial and error? Jim
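    One way to make the type mapping less trial-and-error is to collapse the typedef chain mechanically. A cross-language sketch using Python's ctypes (illustrative only; the DLL name and binding are hypothetical, and only the typedefs come from the header excerpts above):

        import ctypes

        # Collapsing the quoted header typedefs (32-bit build):
        VTDWORD    = ctypes.c_ulong    # typedef unsigned long VTDWORD
        DWORD_PTR  = ctypes.c_ulong    # typedef unsigned long DWORD_PTR
        VTSYSPARAM = DWORD_PTR         # typedef DWORD_PTR VTSYSPARAM
        VTHDOC     = VTSYSPARAM        # typedef VTSYSPARAM VTHDOC
        VTLPVOID   = ctypes.c_void_p   # typedef VTVOID* VTLPVOID
        VTHEXPORT  = VTHDOC            # typedef VTHDOC VTHEXPORT

        # Hypothetical binding; the point is the derived argument list:
        # dll = ctypes.WinDLL("sccex")   # made-up DLL name
        # dll.EXOpenExport.argtypes = [VTHDOC, VTDWORD, VTDWORD, VTLPVOID,
        #                              VTDWORD, VTSYSPARAM, VTLPVOID, VTSYSPARAM,
        #                              ctypes.POINTER(VTHEXPORT)]
        # dll.EXOpenExport.restype = ctypes.c_int   # SCCERR

    The same chain can be read off for the P/Invoke signature: everything that reduces to unsigned long is a 32-bit unsigned integer (uint), VTLPVOID is an IntPtr, and phExport (presumably a VTHEXPORT*) would be a ref to an integer-sized handle rather than a struct buffer; the signed int passed for the VTDWORD dwSpecType is another mismatch that may be worth double-checking.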

    Read the article

  • Why do I get rows of zeros in my 2D FFT?

    - by Nicholas Pringle
    I am trying to replicate the results from a paper. "Two-dimensional Fourier Transform (2D-FT) in space and time along sections of constant latitude (east-west) and longitude (north-south) were used to characterize the spectrum of the simulated flux variability south of 40degS." - Lenton et al (2006). The figures published show "the log of the variance of the 2D-FT". I have tried to create an array consisting of the seasonal cycle of similar data as well as the noise. I have defined the noise as the original array minus the signal array. Here is the code that I used to plot the 2D-FT of the signal array averaged in latitude:

        import numpy as np
        from numpy import ma
        from matplotlib import pyplot as plt
        from Scientific.IO.NetCDF import NetCDFFile

        ### input directory
        indir = '/home/nicholas/data/'

        ### get the flux data, which is in
        ### [time (5-day averages for 10 years), latitude, longitude]
        nc = NetCDFFile(indir + 'CFLX_2000_2009.nc', 'r')
        cflux_southern_ocean = nc.variables['Cflx'][:, 10:50, :]
        cflux_southern_ocean = ma.masked_values(cflux_southern_ocean, 1e+20)  # mask land
        nc.close()
        cflux = cflux_southern_ocean * 1e08  # change units of data from mmol/m^2/s

        ### create an array that consists of the seasonal signal for each pixel
        year_stack = np.split(cflux, 10, axis=0)
        year_stack = np.array(year_stack)
        signal_array = np.tile(np.mean(year_stack, axis=0), (10, 1, 1))
        signal_array = ma.masked_where(signal_array > 1e20, signal_array)  # need to mask

        ### average the array over latitude (or longitude)
        signal_time_lon = ma.mean(signal_array, axis=1)

        ### do a 2D Fourier Transform of the time/space image
        ft = np.fft.fft2(signal_time_lon)
        mgft = np.abs(ft)
        ps = mgft**2
        log_ps = np.log(mgft)
        log_mgft = np.log(mgft)

    Every second row of the ft consists completely of zeros. Why is this? Would it be acceptable to add a small random number to the signal to avoid this?

        signal_time_lon = signal_time_lon + np.random.randint(0, 9, size=(730, 182)) * 1e-05

    EDIT: Adding images and clarifying meaning. The output of rfft2 still appears to be a complex array. Using fftshift shifts the edges of the image to the centre; I still have a power spectrum regardless. I expect that the reason I get rows of zeros is that I have re-created the time series for each pixel. The ft[0, 0] pixel contains the mean of the signal, so ft[1, 0] corresponds to a sinusoid with one cycle over the entire signal in the rows of the starting image. Here is the starting image, produced with the following code:

        plt.pcolormesh(signal_time_lon); plt.colorbar(); plt.axis('tight')

    Here is the result, produced with the following code:

        ft = np.fft.rfft2(signal_time_lon)
        mgft = np.abs(ft)
        ps = mgft**2
        log_ps = np.log1p(mgft)
        plt.pcolormesh(log_ps); plt.colorbar(); plt.axis('tight')

    It may not be clear in the image, but it is only every second row that contains completely zeros. Every tenth pixel (log_ps[10, 0]) is a high value. The other pixels (log_ps[2, 0], log_ps[4, 0], etc.) have very low values.
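    For what it's worth, zero rows are exactly what a tiled signal should produce: repeating one cycle N times turns the DFT into a comb that is nonzero only at multiples of N. A minimal, self-contained sketch (synthetic data, not the poster's fluxes) showing the effect in 1D:

        import numpy as np

        # One 73-sample cycle (one "year" of 5-day averages) tiled 10 times,
        # mirroring the np.tile(..., (10, 1, 1)) construction above.
        cycle = np.sin(2 * np.pi * np.arange(73) / 73)
        tiled = np.tile(cycle, 10)           # 730 samples, exactly periodic

        spectrum = np.abs(np.fft.fft(tiled))
        nonzero = np.nonzero(spectrum > 1e-9)[0]
        print(nonzero)                       # only multiples of 10: [ 10 720]

    Adding random noise merely fills the intermediate bins with noise energy; it doesn't change the fact that the seasonal signal itself lives only at the harmonics of the repeat count.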

    Read the article

  • Re-order form fields in the submit URL

    - by user2521764
    I have a GET form with several visible and hidden input fields. When the form is submitted, selected fields with their values are appended to the URL in the order they are placed in the form. Is there a way to re-order the parameters in the URL using jQuery? Note that for reasons of usability, I can not re-order the elements on the form itself. I know it begs the question "why would I want to do it?", but the reason is that I will be hitting a static page, so the order of the parameters has to be exactly how it is in the static page URL. For example, my form returns a URL:

        http://someurl?names=comm&search=all&type=list

    while the static page has a URL:

        http://someurl?search=all&type=list&names=comm

    A simplified form example is here:

        <form id="search_form" method="get" action="http://www.cbif.gc.ca/pls/pp/ppack.jump">
          <h2>Choose which names you want to be displayed</h2>
          <select name="names">
            <option value="comm">Common names</option>
            <option value="sci">Scientific names</option>
          </select>
          <h2>Choose how you want to view the results</h2>
          <input type="radio" name="search" value="all" id="complete" checked="checked" />
          <label for="complete" id="completeLabel">Complete list</label>
          <br/>
          <input type="radio" name="p_null" value="house" id="house" />
          <label for="house" id="houseLabel">House plants only</label>
          <br/>
          <input type="radio" name="p_null" value="illust" id="illustrat" />
          <label for="illustrat" id="illustratLabel">Plants with Illustrations</label>
          <br/>
          <input type="hidden" name="type" value="list" />
          <input type="submit" value="Submit" />
        </form>

    I can get the form fields with values using $('#search_form').serializeArray() and massage the array like I want to, but I don't know how to set it back, i.e. modify the serialized values so that the submitted URL has my order of parameters. I'm not even sure if this is the right way to go about it, so any pointers would be greatly appreciated.
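    One approach - a sketch, assuming the fixed order search, type, names taken from the static page URL above - is to cancel the native submit, rebuild the query string yourself from serializeArray(), and navigate to it. serializeArray() already skips unchecked radios and disabled fields, so the rebuilt URL only carries what the browser would have sent:

        $('#search_form').on('submit', function (e) {
            e.preventDefault();  // stop the browser's own GET submission

            // Collect name/value pairs as an object for easy lookup.
            var fields = {};
            $.each($(this).serializeArray(), function (_, f) {
                fields[f.name] = f.value;
            });

            // Desired parameter order (assumed from the static page URL).
            var order = ['search', 'type', 'names'];
            var query = $.map(order, function (name) {
                if (fields[name] === undefined) { return null; }  // skip absent params
                return encodeURIComponent(name) + '=' + encodeURIComponent(fields[name]);
            }).join('&');

            window.location.href = this.action + '?' + query;
        });

    Any parameters not listed in order (p_null here) are dropped; extend the array if the static page expects them too.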

    Read the article

  • OpenBSD configuration: Client unable to mount via NFS using Berkeley Automounter (amd)

    - by Rilindo
    What I am trying to do is to have my OpenBSD client (OpenBSD 4.9) auto mount a Linux NFS file system (Scientific Linux 6.1). So far, I am not sure if it is configured correctly. To get things out of the way, I am able to mount NFS manually:

        # mount_nfs -T -3 192.168.15.100:/exports /mnt
        # ls -la /mnt
        total 52
        drwxr-xr-x   7 root    wheel   4096 Oct  4 22:42 .
        drwxr-xr-x  16 root    wheel    512 Nov 26 16:33 ..
        drwxrwxr-x   5 _sndio  _sndio  4096 Oct 31 21:58 centos
        drwxr-xr-x  15 root    wheel   4096 Nov  6 09:17 home
        drwxr-xr-x   5 root    wheel   4096 Oct 31 21:27 sl
        drwxr-xr-x   3 root    wheel   4096 Nov 19 16:02 sles
        drwxr-xr-x  17 503     503     4096 Nov 10 17:37 users
        #

    So connectivity is not an issue, as far as I can tell. As per the man page, the following is configured in /etc/amd/auto.home:

        /defaults type:=nfs;sublink:=${key};opts:=rw,soft,intr,vers=3,proto=tcp
        * rhost:=192.168.15.100;rfs:=/exports

    In turn, /etc/amd/master is configured as such:

        # cat /etc/amd/master
        /exports amd.home

    Upon reboot, I can see it mounted, but curiously enough, it shows the amd process instead of the hostname:

        amd:24490 0 0 0 100% /exports

    From what I understand, amd acts a little differently from FreeBSD's. Still, I tried to see if it can automount. Nope:

        ksh: cd: /exports/users - Resource temporarily unavailable
        # cd /exports/192.168.15.100/host/users
        ksh: cd: /exports/192.168.15.100/host/users - Resource temporarily unavailable

    A search on Google doesn't help much - it seems that automounting NFS with OpenBSD is not something that is usually done. Other than this, information is fairly sparse. I can, of course, always mount it permanently, but I tend to be a bit anal about convention, so no for now. :) Some direction would be appreciated. (And oh, in case you are wondering, I tried the FreeBSD way of using amd and that hasn't worked out - although I wouldn't mind an explanation of the difference between how FreeBSD implements it and how OpenBSD does.)

    UPDATE: After re-writing the map file several times, I got as far as actually communicating with the NFS server with this configuration:

        /defaults type:=nfs;rhost:=kerberos.monzell.com;rfs:=/exports;\
            sublink:=${key};opts:=rw,nodev,nosuid,soft,intr,tcp,resvport
        * ${host}==${rhost};type:=nfs;fs:=${rfs};opts:=rw,nodev,nosuid,soft,intr,tcp,resvport

    However, for some reason, it seems that amd will only default to NFS version 2 over UDP:

        # tcpdump dst kerberos
        tcpdump: listening on pcn0, link-type EN10MB
        tcpdump: WARNING: compensating for unaligned libpcap packets
        20:38:28.558385 openbsd.monzell.com.856 > kerberos.monzell.com.sunrpc: udp 100
        20:38:28.559154 openbsd.monzell.com.856 > kerberos.monzell.com.892: udp 96
        20:38:30.592761 openbsd.monzell.com.856 > kerberos.monzell.com.nfsd: xid 0x22000000 (NFSv2) 40 null
        20:38:33.558107 arp reply openbsd.monzell.com is-at 52:54:00:52:8f:66

    I tried various options to force it to mount as NFSv3, such as:

        /defaults type:=nfs;rhost:=kerberos.monzell.com;rfs:=/exports;\
            sublink:=${key};opts:=rw,nodev,nosuid,soft,intr,vers=3,proto=tcp,resvport
        * ${host}==${rhost};type:=nfs;fs:=${rfs};opts:=rw,nodev,nosuid,soft,intr,vers=3,proto=tcp,resvport

    or:

        /defaults type:=nfs;rhost:=kerberos.monzell.com;rfs:=/exports;\
            sublink:=${key};opts:=rw,nodev,nosuid,soft,intr,vers=-3,proto=tcp,resvport
        * ${host}==${rhost};type:=nfs;fs:=${rfs};opts:=rw,nodev,nosuid,soft,intr,vers=3,proto=tcp,resvport

    Still nothing. Curiously enough, OpenBSD's own mount defaults to version 3, so I am not sure why amd would start with version 2. What would be the correct options to pass?
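    Independent of the map syntax, it is worth confirming what the Linux server actually advertises before blaming amd - a quick diagnostic sketch (both tools ship with the OpenBSD base system; the IP is the one from the question):

        # Ask the server's portmapper which NFS versions/transports it offers;
        # an "nfs ... 3 tcp" row must be present for vers=3,proto=tcp to work.
        rpcinfo -p 192.168.15.100

        # Confirm the export list the server is willing to hand out.
        showmount -e 192.168.15.100

    If version 3 over TCP is advertised there but amd still negotiates NFSv2/UDP, the limitation is on the client side, which narrows the search considerably.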

    Read the article

< Previous Page | 10 11 12 13 14 15 16  | Next Page >