Search Results

Search found 842 results on 34 pages for 'fuzzy purple monkey'.


  • Android Drawable question.

    - by Tarmon
    Hey everyone, I am trying to create a drawable in code and change its color based on some criteria. I can get it to work, but it won't let me set the padding on the view. Any help would be appreciated.

        <?xml version="1.0" encoding="utf-8"?>
        <ImageView
            android:id="@+id/icon"
            android:layout_width="50px"
            android:layout_height="fill_parent" />
        <TextView
            android:id="@+id/label"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:paddingLeft="17px"
            android:textSize="28sp" />

        ImageView icon = (ImageView) row.findViewById(R.id.icon);
        ShapeDrawable mDrawable;
        int x = 0;
        int y = 0;
        int width = 50;
        int height = 50;
        float[] outerR = new float[] { 12, 12, 12, 12, 12, 12, 12, 12 };
        mDrawable = new ShapeDrawable(new RoundRectShape(outerR, null, null));
        mDrawable.setBounds(x, y + height, x + width, y);
        switch (position) {
            case 0:  mDrawable.getPaint().setColor(0xffff0000); break; // Red
            case 1:  mDrawable.getPaint().setColor(0xffff0000); break; // Red
            case 2:  mDrawable.getPaint().setColor(0xff00c000); break; // Green
            case 3:  mDrawable.getPaint().setColor(0xff00c000); break; // Green
            case 4:  mDrawable.getPaint().setColor(0xff0000ff); break; // Blue
            case 5:  mDrawable.getPaint().setColor(0xff0000ff); break; // Blue
            case 6:  mDrawable.getPaint().setColor(0xff696969); break; // Gray
            case 7:  mDrawable.getPaint().setColor(0xff696969); break; // Gray
            case 8:  mDrawable.getPaint().setColor(0xffffff00); break; // Yellow
            case 9:  mDrawable.getPaint().setColor(0xff8b4513); break; // Brown
            case 10: mDrawable.getPaint().setColor(0xff8b4513); break; // Brown
            case 11: mDrawable.getPaint().setColor(0xff8b4513); break; // Brown
            case 12: mDrawable.getPaint().setColor(0xffa020f0); break; // Purple
            case 13: mDrawable.getPaint().setColor(0xffff0000); break; // Red
            case 14: mDrawable.getPaint().setColor(0xffffd700); break; // Gold
            case 15: mDrawable.getPaint().setColor(0xffff6600); break; // Orange
        }
        icon.setBackgroundDrawable(mDrawable);
        icon.setPadding(5, 5, 5, 5);

    If I set the padding in XML, it is just ignored. Thanks, Rob
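    A minimal sketch of one thing worth trying here (an untested assumption, not a confirmed fix): give the ShapeDrawable its own padding before attaching it, since a view takes its padding from its background drawable when the background is set.

        // Hypothetical variation: store the padding on the drawable itself,
        // so setBackgroundDrawable() applies it to the view.
        ShapeDrawable d = new ShapeDrawable(new RoundRectShape(outerR, null, null));
        d.getPaint().setColor(0xffff0000);
        d.setPadding(5, 5, 5, 5);          // padding carried by the drawable
        icon.setBackgroundDrawable(d);     // view picks it up as background padding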

    Read the article

  • PHP-based LaTeX parser -- where to begin?

    - by Alex Basson
    The project: I want to build a LaTeX-to-MathML translator in PHP. Why? Because I'm a mathematician, and I want to publish math on my Drupal site. It doesn't have to translate all of LaTeX, since the basic document-level stuff is ably handled by the CMS and wouldn't be written in LaTeX to begin with; it just has to translate math written in LaTeX into math written in MathML.

    Although I feel as though I've done my due diligence, this doesn't seem to exist already. Maybe I'm wrong---if you know of something that would serve this purpose, by all means let me know, and thank you in advance. But assuming it doesn't exist, I guess I have to go write it myself.

    Here's the thing, though: I've never done anything this ambitious. I don't really know where to begin. I've used PHP for years, but just to do the standard "build a CMS with PHP and MySQL" type of stuff. I've never attempted anything as seemingly sophisticated as translation from one language to another. I'm just dumb enough to consider doing it with regex---after all, LaTeX is a much more formal language, and it doesn't allow for nearly the kinds of pathological edge cases as, say, HTML. But on the other hand, I'm just smart enough to realize this is probably a terrible idea: now I have two problems, and I sure don't want to end up like this guy.

    So if that's not the way to go (right?), what is? How should I start thinking about this problem? Am I essentially writing a LaTeX compiler in PHP, and if so, what do I need to know to do that (like, should I just go read the Purple Dragon book first?)? I'm both really excited and pretty intimidated by the prospect of this project, but hey, this is how we all learn to be programmers, right? If something we need doesn't exist, we go and build it; necessity is the mother of... you get the point. Tremendous thanks to everyone in advance for any and all guidance you can offer.
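    For a sense of what the first step of such a translator might look like, here is a minimal sketch (hypothetical function name, covering only a tiny subset of LaTeX math) of the tokenizing pass that a parser would build on:

        <?php
        // Hypothetical first pass: split a LaTeX math snippet into tokens.
        // Handles control sequences, braces, ^, _, digits, letters and symbols.
        function tokenizeLatexMath($src) {
            preg_match_all('/\\\\[a-zA-Z]+|[{}^_]|[0-9]+|[a-zA-Z]|\S/', $src, $matches);
            return $matches[0];
        }

        print_r(tokenizeLatexMath('\\frac{x^2}{2}'));
        // tokens: \frac { x ^ 2 } { 2 }

    A recursive-descent parser over a token stream like this is usually far more tractable than a single regex over the raw string.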

    Read the article

  • ASP.NET Client to Server communication

    - by Nelson
    Can you help me make sense of all the different ways to communicate from browser to server in ASP.NET? I have made this a community wiki, so feel free to edit my post to improve it. Specifically, I'm trying to understand in which scenario to use each one by listing how each works. I'm a little fuzzy on UpdatePanel vs. CallBack (with ViewState): I know UpdatePanel always returns HTML while CallBack can return JSON. Any other major differences? ...and CallBack (without ViewState) vs. WebMethod: CallBack goes through most of the Page lifecycle, WebMethod doesn't. Any other major differences? A sketch of the PageMethods pattern follows the list below.

    IHttpHandler
    - Custom handler for anything (page, image, etc.)
    - Only does what you tell it to do (light server processing)
    - Page is an implementation of IHttpHandler
    - If you don't need what Page provides, create a custom IHttpHandler
    - If you are using Page but overriding Render() and not generating HTML, you can probably do it with a custom IHttpHandler (e.g. writing binary data such as images)
    - By default can use the .axd or .ashx file extensions -- both are functionally similar
    - .ashx doesn't have any built-in endpoints, so it's preferred by convention

    Regular PostBack (System.Web.UI.Page : IHttpHandler)
    - Inherits Page
    - Full PostBack, including ViewState and HTML control values (heavy traffic)
    - Full Page lifecycle (heavy server processing)
    - No JavaScript required
    - Webpage flickers/scrolls since everything is reloaded in the browser
    - Returns full page HTML (heavy traffic)

    UpdatePanel (Control)
    - Control inside Page
    - Full PostBack, including ViewState and HTML control values (heavy traffic)
    - Full Page lifecycle (heavy server processing)
    - Controls outside the UpdatePanel do Render(NullTextWriter)
    - Must use ScriptManager
    - If there is no client-side JavaScript, it can fall back to a regular PostBack (?)
    - No flicker/scroll since it's an async call, unless it falls back to a regular postback
    - Can be used with master pages and user controls
    - Has built-in support for a progress bar
    - Returns HTML for controls inside the UpdatePanel (medium traffic)

    Client CallBack (Page, System.Web.UI.ICallbackEventHandler)
    - Inherits Page
    - Most of the Page lifecycle (heavy server processing)
    - Takes only data you specify (light traffic) and optionally ViewState (?) (medium traffic)
    - Client must support JavaScript and use ScriptManager
    - No flicker/scroll since it's an async call
    - Can be used with master pages and user controls
    - Returns only data you specify, in the format you specify (e.g. JSON, XML...) (?) (light traffic)

    WebMethod
    - Class implements System.Web.Services.WebService
    - HttpContext available through this.Context
    - Takes only data you specify (light traffic)
    - Server only runs the called method (light server processing)
    - Client must support JavaScript
    - No flicker/scroll since it's an async call
    - Can be used with master pages and user controls
    - Returns only data you specify, typically JSON (light traffic)
    - Can create an instance of a server control to render HTML and send it back as a string, but events, paging in GridView, etc. won't work

    PageMethods
    - Essentially a WebMethod contained in the Page class, so most of WebMethod's bullets apply
    - Method must be public static, therefore no Page instance is accessible
    - HttpContext available through HttpContext.Current
    - Accessed directly by URL Page.aspx/MethodName (e.g. with XMLHttpRequest directly or with a library such as jQuery)
    - Setting the ScriptManager property EnablePageMethods="True" generates a JavaScript proxy for each WebMethod
    - Cannot be used directly in user controls with master pages and user controls

    Any others?
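    A minimal sketch of the PageMethods pattern described above (hypothetical page and method names; assumes a ScriptManager with EnablePageMethods="True" in the .aspx):

        // Code-behind: Default.aspx.cs
        using System.Web.Services;

        public partial class _Default : System.Web.UI.Page
        {
            // Must be public static: no Page instance is available.
            [WebMethod]
            public static string GetGreeting(string name)
            {
                // Light processing: only this method runs, not the Page lifecycle.
                return "Hello, " + name;
            }
        }

        // Client side, via the generated JavaScript proxy:
        //   PageMethods.GetGreeting('world', function (result) { alert(result); });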

    Read the article

  • SET game odds simulation (MATLAB)

    - by yuk
    Here is an interesting problem for your weekend. :) I recently found the great card game SET. Briefly, there are 81 cards with four features: symbol (oval, squiggle or diamond), color (red, purple or green), number (one, two or three) and shading (solid, striped or open). The task is to find (from 12 selected cards) a SET of 3 cards, in which each of the four features is either all the same on each card or all different on each card (no 2+1 combination).

    In my free time I've decided to code it in MATLAB to find a solution and to estimate the odds of having a set in randomly selected cards. Here is the code:

        %% initialization
        K = 12; % cards to draw
        NF = 4; % number of features (usually 3 or 4)
        setallcards = unique(nchoosek(repmat(1:3,1,NF),NF),'rows'); % all cards: rows - cards, columns - features
        setallcomb = nchoosek(1:K,3); % index of all combinations of K cards by 3

        %% test
        tic
        NIter = 1e2; % number of test iterations
        setexists = 0; % test results holder
        % C = progress('init'); % if you have progress function from FileExchange
        for d = 1:NIter
            % C = progress(C,d/NIter);
            % cards for current test
            setdrawncardidx = randi(size(setallcards,1),K,1);
            setdrawncards = setallcards(setdrawncardidx,:);
            % find all sets in current test iteration
            for setcombidx = 1:size(setallcomb,1)
                setcomb = setdrawncards(setallcomb(setcombidx,:),:);
                if all(arrayfun(@(x) numel(unique(setcomb(:,x))), 1:NF)~=2) % test one combination
                    setexists = setexists + 1;
                    break % to find only the first set
                end
            end
        end
        fprintf('Set:NoSet = %g:%g = %g:1\n', setexists, NIter-setexists, setexists/(NIter-setexists))
        toc

    100-1000 iterations are fast, but be careful with more: one million iterations takes about 15 hours on my home computer. Anyway, with 12 cards and 4 features I get odds of around 13:1 of having a set. This is actually a problem: the instruction book says this number should be 33:1, and it was recently confirmed by Peter Norvig. He provides Python code, but I didn't test it. So can you find the error?

    Read the article

  • Assembly Load and loading the "sub-modules" dependencies - "cannot find the file specified"

    - by Ted
    There are several questions out there that ask the same thing, but I cannot understand the answers they received, so here goes. Similar questions: http://stackoverflow.com/questions/1874277/dynamically-load-assembly-and-manually-force-path-to-get-referenced-assemblies ; http://stackoverflow.com/questions/22012/loading-assemblies-and-its-dependencies-closed

    The question in short: I need to figure out how dependencies, i.e. references in my modules, can be loaded dynamically. Right now I am getting "The system cannot find the file specified" on assemblies referenced in my so-called modules. I cannot really work out how to use the AssemblyResolve event...

    The longer version: I have one application, MODULECONTROLLER, that loads separate modules. These "separate modules" are located in well-known subdirectories, like:

        appBinDir\Modules\Module1
        appBinDir\Modules\Module2

    Each directory contains all the DLLs that exist in the bin directory of those projects after a build. The MODULECONTROLLER loads all the DLLs contained in those folders using this code:

        byte[] bytes = File.ReadAllBytes(dllFileFullPath);
        Assembly assembly = null;
        assembly = Assembly.Load(bytes);

    I am, as you can see, loading the byte[] array (so I don't lock the DLL files). Now, in for example MODULE1, I have a static reference called MyGreatXmlProtocol. MyGreatXmlProtocol.dll also exists in the directory appBinDir\Modules\Module1 and is loaded using the above code. When code in MODULE1 tries to use MyGreatXmlProtocol, I get:

        Could not load file or assembly 'MyGreatXmlProtocol, Version=1.0.3797.26527, Culture=neutral, PublicKeyToken=null' or one of its dependencies. The system cannot find the file specified.

    In a post (like this one) they say that:

        To my understanding reflection will load the main assembly and then search the GAC for the referenced assemblies; if it cannot find it there, you can then incorporate an AssemblyResolve event.

    First: is it really necessary to use the AssemblyResolve event to make this work? Shouldn't my different MODULEs themselves load their DLLs, as they are statically referenced? Second: if AssemblyResolve is the way to go - how do I use it? I have attached a handler to the event but I never get anything for MyGreatXmlProtocol...

    === EDIT ===

    Code for the AssemblyResolve event handler:

        public GUI()
        {
            InitializeComponent();
            AppDomain.CurrentDomain.AssemblyResolve += new ResolveEventHandler(CurrentDomain_AssemblyResolve);
            ...
        }

        Assembly CurrentDomain_AssemblyResolve(object sender, ResolveEventArgs args)
        {
            Console.WriteLine(args.Name);
            return null;
        }

    Hope I wasn't too fuzzy =) Thx
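    For the second question, a minimal sketch of how such a handler is often fleshed out (untested; the probing locations are an assumption based on the folder layout above, and System.IO/System.Reflection usings are assumed). Returning null, as in the handler above, tells the runtime the resolve failed, so a handler normally has to locate and load the assembly itself:

        // Hypothetical resolver: probe each module directory for the requested DLL.
        Assembly CurrentDomain_AssemblyResolve(object sender, ResolveEventArgs args)
        {
            // args.Name is a full name like "MyGreatXmlProtocol, Version=..., ..."
            string fileName = new AssemblyName(args.Name).Name + ".dll";
            string modulesDir = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Modules");
            foreach (string dir in Directory.GetDirectories(modulesDir))
            {
                string candidate = Path.Combine(dir, fileName);
                if (File.Exists(candidate))
                {
                    // Load from bytes, consistent with the no-lock approach above.
                    return Assembly.Load(File.ReadAllBytes(candidate));
                }
            }
            return null; // not found; the original load failure will surface
        }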

    Read the article

  • What is the difference between NULL in C++ and null in Java?

    - by Stephano
    I've been trying to figure out why C++ is making me crazy typing NULL. Suddenly it hit me the other day: I've been typing null (lower case) in Java for years. Now suddenly I'm programming in C++ and that little chunk of muscle memory is making me crazy.

    Wikiperipatetic defines C++ NULL as part of stddef:

        A macro that expands to a null pointer constant. It may be defined as ((void*)0), 0 or 0L depending on the compiler and the language.

    Sun's docs tell me this about Java's "null literal":

        The null type has one value, the null reference, represented by the literal null, which is formed from ASCII characters. A null literal is always of the null type.

    So this is all very nice. I know what a null pointer reference is, and thank you for the compiler notes. Now I'm a little fuzzy on the idea of a literal in Java, so I read on...

        A literal is the source code representation of a fixed value; literals are represented directly in your code without requiring computation. There's also a special null literal that can be used as a value for any reference type. null may be assigned to any variable, except variables of primitive types. There's little you can do with a null value beyond testing for its presence. Therefore, null is often used in programs as a marker to indicate that some object is unavailable.

    OK, so I think I get it now. In C++, NULL is a macro that, when compiled, becomes the null pointer constant. In Java, null is a fixed value that any non-primitive can be assigned to; great for testing in a handy if statement. Java does not have pointers, so I can see why they kept null a simple value rather than anything fancy. But why did Java decide to change the all-caps NULL to null? Furthermore, am I missing anything here?
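    For illustration, a minimal C++ fragment showing the macro in use (a sketch; where NULL comes from and what it expands to varies by implementation, as the quote above notes):

        #include <cstddef>  // one of several headers that define NULL

        int main() {
            int* p = NULL;        // expands to a null pointer constant such as 0 or 0L
            if (p == NULL) {
                // no object to work with yet
            }
            return 0;
        }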

    Read the article

  • iproute2 not functioning ("RTNETLINK answers: Operation not supported")

    - by James Watt
    The command and error message:

        gtwy ~ # ip rule add from 64.251.23.186 table t1
        RTNETLINK answers: Operation not supported

    An older article about the same problem, which did not help me: http://forums.gentoo.org/viewtopic-t-696982-start-0-postdays-0-postorder-asc-highlight-.html

    I have looked on Google at great length to try to find a solution. It seems that my kernel configuration is missing something? Any help or ideas would be appreciated. My system/kernel is:

        2.6.36-gentoo-r5 #3 SMP Thu Jan 13 10:49:06 EST 2011 x86_64 Intel(R) Xeon(R) CPU X3220 @ 2.40GHz GenuineIntel GNU/Linux

    I am posting this on SuperUser since this system is used as a workstation and this problem is unrelated to specific tasks that are handled exclusively by servers. iproute2 is installed:

        gtwy etc # emerge --search iproute2
        Searching...
        [ Results for search key : iproute2 ]
        [ Applications found : 1 ]
        * sys-apps/iproute2
            Latest version available: 2.6.35-r2
            Latest version installed: 2.6.35-r2
            Size of files: 378 kB
            Homepage: http://www.linuxfoundation.org/collaborate/workgroups/networking/iproute2
            Description: kernel routing and traffic control utilities
            License: GPL-2

    A small snippet of my kernel .config (view entire .config):

        gtwy linux # cat .config | grep NETLINK
        CONFIG_NETFILTER_NETLINK=y
        CONFIG_NETFILTER_NETLINK_QUEUE=y
        CONFIG_NETFILTER_NETLINK_LOG=y
        CONFIG_NF_CT_NETLINK=y
        CONFIG_SCSI_NETLINK=y
        gtwy linux # cat .config | grep IP_ADVANCED_ROUTER
        CONFIG_IP_ADVANCED_ROUTER=y
        gtwy linux # cat .config | grep INGRESS
        CONFIG_NET_SCH_INGRESS=y
        gtwy linux # cat .config | grep NET_SCHED
        CONFIG_NET_SCHED=y

    emerge --info:

        Portage 2.1.9.25 (default/linux/amd64/10.0, gcc-4.1.2, glibc-2.10.1-r1, 2.6.36-gentoo-r5 x86_64)
        =================================================================
        System uname: Linux-2.6.36-gentoo-r5-x86_64-Intel-R-_Xeon-R-_CPU_X3220_@_2.40GHz-with-gentoo-1.12.13
        Timestamp of tree: Thu, 13 Jan 2011 01:15:01 +0000
        app-shells/bash: 4.0_p37
        dev-java/java-config: 1.3.7-r1, 2.1.10
        dev-lang/python: 2.4.6, 2.5.4-r4, 2.6.5-r2, 3.1.2-r3
        sys-apps/baselayout: 1.12.13
        sys-apps/sandbox: 1.6-r2
        sys-devel/autoconf: 2.13, 2.65
        sys-devel/automake: 1.9.6-r2::<unknown repository>, 1.10.2, 1.11.1
        sys-devel/binutils: 2.20.1-r1
        sys-devel/gcc: 4.1.2, 4.3.4, 4.4.3-r2
        sys-devel/gcc-config: 1.4.1
        sys-devel/libtool: 2.2.6b
        sys-devel/make: 3.81
        virtual/os-headers: 2.6.30-r1 (sys-kernel/linux-headers)
        ACCEPT_KEYWORDS="amd64"
        ACCEPT_LICENSE="*"
        CBUILD="x86_64-pc-linux-gnu"
        CFLAGS="-march=nocona -O2 -pipe"
        CHOST="x86_64-pc-linux-gnu"
        CONFIG_PROTECT="/etc /var/bind"
        CONFIG_PROTECT_MASK="/etc/ca-certificates.conf /etc/env.d /etc/env.d/java/ /etc/fonts/fonts.conf /etc/gconf /etc/php/apache2-php5/ext-active/ /etc/php/cgi-php5/ext-active/ /etc/php/cli-php5/ext-active/ /etc/revdep-rebuild /etc/sandbox.d /etc/terminfo"
        CXXFLAGS="-march=nocona -O2 -pipe"
        DISTDIR="/usr/portage/distfiles"
        FEATURES="assume-digests binpkg-logs distlocks fixlafiles fixpackages news parallel-fetch protect-owned sandbox sfperms strict unknown-features-warn unmerge-logs unmerge-orphans userfetch"
        GENTOO_MIRRORS="http://gentoo.chem.wisc.edu/gentoo"
        LC_ALL="en_US.UTF-8"
        LDFLAGS="-Wl,-O1 -Wl,--as-needed"
        LINGUAS="en"
        MAKEOPTS="-j5"
        PKGDIR="/usr/portage/packages"
        PORTAGE_CONFIGROOT="/"
        PORTAGE_RSYNC_OPTS="--recursive --links --safe-links --perms --times --compress --force --whole-file --delete --stats --timeout=180 --exclude=/distfiles --exclude=/local --exclude=/packages"
        PORTAGE_TMPDIR="/var/tmp"
        PORTDIR="/usr/portage"
        PORTDIR_OVERLAY="/usr/local/portage"
        SYNC="rsync://rsync.namerica.gentoo.org/gentoo-portage"
        USE="acl amd64 apache2 berkdb bzip2 cli cracklib crypt ctype cups curl cxx dri fortran gdbm gpm iconv jpeg jpeg2k libwww mmx modules mudflap multilib mysql ncurses nls nptl nptlonly openmp pam pcre perl php png pppd python readline session sockets sse sse2 ssl symlink sysfs tcpd threads unicode vhosts xml xorg xsl zlib"
        ALSA_CARDS="ali5451 als4000 atiixp atiixp-modem bt87x ca0106 cmipci emu10k1x ens1370 ens1371 es1938 es1968 fm801 hda-intel intel8x0 intel8x0m maestro3 trident usb-audio via82xx via82xx-modem ymfpci"
        ALSA_PCM_PLUGINS="adpcm alaw asym copy dmix dshare dsnoop empty extplug file hooks iec958 ioplug ladspa lfloat linear meter mmap_emul mulaw multi null plug rate route share shm softvol"
        APACHE2_MODULES="actions alias auth_basic authn_alias authn_anon authn_dbm authn_default authn_file authz_dbm authz_default authz_groupfile authz_host authz_owner authz_user autoindex cache cgi cgid dav dav_fs dav_lock deflate dir disk_cache env expires ext_filter file_cache filter headers include info log_config logio mem_cache mime mime_magic negotiation rewrite setenvif speling status unique_id userdir usertrack vhost_alias"
        COLLECTD_PLUGINS="df interface irq load memory rrdtool swap syslog"
        ELIBC="glibc"
        GPSD_PROTOCOLS="ashtech aivdm earthmate evermore fv18 garmin garmintxt gpsclock itrax mtk3301 nmea ntrip navcom oceanserver oldstyle oncore rtcm104v2 rtcm104v3 sirf superstar2 timing tsip tripmate tnt ubx"
        INPUT_DEVICES="keyboard mouse evdev"
        KERNEL="linux"
        LCD_DEVICES="bayrad cfontz cfontz633 glk hd44780 lb216 lcdm001 mtxorb ncurses text"
        LINGUAS="en"
        PHP_TARGETS="php5-3"
        RUBY_TARGETS="ruby18"
        USERLAND="GNU"
        VIDEO_CARDS="fbdev glint intel mach64 mga neomagic nouveau nv r128 radeon savage sis tdfx trident vesa via vmware dummy v4l"
        XTABLES_ADDONS="quota2 psd pknock lscan length2 ipv4options ipset ipp2p iface geoip fuzzy condition tee tarpit sysrq steal rawnat logmark ipmark dhcpmac delude chaos account"
        Unset: CPPFLAGS, CTARGET, EMERGE_DEFAULT_OPTS, FFLAGS, INSTALL_MASK, LANG, PORTAGE_BUNZIP2_COMMAND, PORTAGE_COMPRESS, PORTAGE_COMPRESS_FLAGS, PORTAGE_RSYNC_EXTRA_OPTS

    Read the article

  • What free space thresholds/limits are advisable for 640 GB and 2 TB hard disk drives with ZEVO ZFS on OS X?

    - by Graham Perrin
    Assuming that free space advice for ZEVO will not differ from advice for other modern implementations of ZFS...

    Question: Please, what percentages or amounts of free space are advisable for hard disk drives of the following sizes?

    - 640 GB
    - 2 TB

    Thoughts: A standard answer for modern implementations of ZFS might be "no more than 96 percent full". However, if I apply that to (say) a single-disk 640 GB dataset where some of the most commonly used files (by VirtualBox) are larger than 15 GB each, then I guess that blocks for those files will become suboptimally spread across the platters with around 26 GB free. I read that in most cases, fragmentation and defragmentation should not be a concern with ZFS. Still, I like the mental picture of most fragments of a large .vdi in reasonably close proximity to each other. (Do features of ZFS make that wish for proximity too old-fashioned?)

    Side note: there might arise the question of how to optimise performance after a threshold is 'broken'. If it arises, I'll keep it separate.

    Background: On a 640 GB StoreJet Transcend (product ID 0x2329) in the past I probably went beyond an advisable threshold. Currently the largest file is around 17 GB, and I doubt that any .vdi or other file on this disk will grow beyond 40 GB. (Ignore the purple masses; those are bundles of 8 MB band files.) Without HFS Plus, the thresholds of twenty, ten and five percent that I associate with the Mobile Time Machine file system need not apply. I currently use ZEVO Community Edition 1.1.1 with Mountain Lion, OS X 10.8.2, but I'd like answers to be not too version-specific.

    References, chronological order:

    - ZFS Block Allocation (Jeff Bonwick's Blog) (2006-11-04)
    - Space Maps (Jeff Bonwick's Blog) (2007-09-13)
    - Doubling Exchange Performance (Bizarre ! Vous avez dit Bizarre ?) (2010-03-11):

        ... So to solve this problem, what went in the 2010/Q1 software release is multifold. The most important thing is: we increased the threshold at which we switch from 'first fit' (go fast) to 'best fit' (pack tight) from 70% full to 96% full. With TB drives, each slab is at least 5GB and 4% is still 200MB - plenty of space and no need to do anything radical before that. This gave us the biggest bang. Second, instead of trying to reuse the same primary slabs until it failed an allocation, we decided to stop giving the primary slab this preferential treatment as soon as the biggest allocation that could be satisfied by a slab was down to 128K (metaslab_df_alloc_threshold). At that point we were ready to switch to another slab that had more free space. We also decided to reduce the SMO bonus. Before, a slab that was 50% empty was preferred over slabs that had never been used. In order to foster more write aggregation, we reduced the threshold to 33% empty. This means that a random write workload now spreads to more slabs, where each one will have a larger amount of free space, leading to more write aggregation. Finally, we also saw that slab loading was contributing to lower performance and implemented a slab prefetch mechanism to reduce the downtime associated with that operation. The conjunction of all these changes led to 50% improved OLTP and 70% reduced variability from run to run ...

    - OLTP Improvements in Sun Storage 7000 2010.Q1 (Performance Profiles) (2010-03-11)
    - Alasdair on Everything » ZFS runs really slowly when free disk usage goes above 80% (2010-07-18), where commentary includes: ... OpenSolaris has changed this in onnv revision 11146 ...
    - [CFT] Improved ZFS metaslab code (faster write speed) (2010-08-22)

    Read the article

  • WDS DHCP same server on Windows Server 2008

    - by Richard
    I have been struggling with a problem on my Windows Server 2008 for the past 4-5 hours and cannot figure out what's wrong. I have tried pretty much everything that I found on Google, and all the links are purple. Hopefully you guys can help me.

    I am running a Windows Server 2008 Standard edition with the latest updates as of today, and also a Windows Server 2003. Both are virtual machines on my ESXi 5 server. My network is 192.168.10.0/24:

    - W2k8: 192.168.10.251, the PDC running ADS, DHCP and WDS
    - W2k3: 192.168.10.253 and 192.168.1.175, running Routing and Remote Access and ISA 2006 Enterprise

    In my internal network (192.168.10.0/24) I have my client machine (192.168.10.10) that runs VMware Workstation. I am trying to deploy Windows 7 Home Premium to a virtual machine on the Workstation via PXE. I have set the Workstation VM's network adapter to "bridged" so that it uses the physical network adapter and is connected to my internal network. The DHCP pool is configured to give IP addresses from 192.168.10.10-192.168.10.15 (it works for normal clients and is not used up). When I start my VM with PXE I get the error:

        PXE-E52: proxyDHCP offers were received. No DHCP offers were received

    Apparently this means "that WDS responded but the DHCP server did not." People suggested directing the traffic to both WDS and DHCP on the router; since everything is on the same subnet there is no need for that, as the broadcast is seen by both WDS and DHCP. No reservation for the virtual MAC addresses is made on the DHCP server. Furthermore, it was suggested to configure the DHCP options (see the command sketch after this section):

    - Option 60 = PXEClient
    - Option 66 = WDS server name or IP address
    - Option 67 = Boot file name

    However, this is not recommended by Microsoft; I tried it and it did not solve my problem.

    The configuration on the WDS server (my system is German, so the actual naming might differ):

    PXE Response tab:
    - PXE response is set to "All (known and unknown)"

    DHCP tab:
    - "Do not listen on port 67" is NOT ticked - if I tick it, I get no responses at all and the PXE error becomes PXE-E51 (neither DHCP nor proxyDHCP offers were received)
    - DHCP option 60 for "PXEClient" is ticked
    - The confusing part here is that the tab advises ticking the first option since DHCP is on the same server.

    Network Configuration tab:
    - Use the following IP address range for multicast IP addresses: 224.0.1.0 - 224.0.10.0. That's not the default, but it is within the allowed range.
    - The UDP port range is the default, since changing it is not advised.

    I tried changing the network profile between 100 Mbit, 1 Gbit and custom. I am running a 1 Gbit network with Cat 6 cables and a 5-port 1 Gbit Netgear switch; everything is configured to use 1 Gbit. The WDS server is authorised for the DHCP server.

    My ISA 2006 configuration: for the internal network, including the W2k3 host, I have configured a policy array allowing protocols 67, 68, 53, ICMP, 4011 UDP receive, 64001-6500 UDP send receive, and 69 UDP send.

    Routing and Remote Access: I tried the DHCP relay agent configuration that was suggested as well, but that did not work.

    I would highly appreciate any kind of help because I am pretty much done here with my nerves. Thank you very much in advance.
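    For reference, a sketch of how those three options would be set from the command line on the DHCP server (values assumed from the network above; and again, Microsoft advises against relying on options 60/66/67 for this):

        rem Option 60 is not predefined on Windows DHCP, so it may first need:
        rem   netsh dhcp server add optiondef 60 PXEClient STRING 0
        netsh dhcp server scope 192.168.10.0 set optionvalue 060 STRING "PXEClient"
        netsh dhcp server scope 192.168.10.0 set optionvalue 066 STRING "192.168.10.251"
        netsh dhcp server scope 192.168.10.0 set optionvalue 067 STRING "boot\x86\wdsnbp.com"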

    Read the article

  • What *exactly* gets screwed when I kill -9 or pull the power?

    - by Mike
    Set-Up: I've been a programmer for quite some time now, but I'm still a bit fuzzy on deep, internal stuff. Now, I am well aware that it's not a good idea to either:

    1. kill -9 a process (bad)
    2. spontaneously pull the power plug on a running computer or server (worse)

    However, sometimes you just plain have to. Sometimes a process just won't respond no matter what you do, and sometimes a computer just won't respond, no matter what you do. Let's assume a system running Apache 2, MySQL 5, PHP 5, and Python 2.6.5 through mod_wsgi. Note: I'm most interested in Mac OS X here, but an answer that pertains to any UNIX system would help me out.

    My Concern: Each time I have to do either one of these, especially the second, I'm very worried for a period of time that something has been broken. Some file somewhere could be corrupt -- who knows which file? There are over 1,000,000 files on the computer. I'm often using OS X, so I'll run a "Verify Disk" operation through Disk Utility. It will report no problems, but I'm still concerned. What if some configuration file somewhere got screwed up? Or even worse, what if a binary file somewhere is corrupt? Or a script file somewhere is corrupt now? What if some hardware is damaged? What if I don't find out about it until next month, in a critical scenario, when the corruption or damage causes a catastrophe? Or, what if valuable data is already lost?

    My Hope: My hope is that these concerns and worries are unfounded. After all, after doing this many times before, nothing truly bad has happened yet. The worst is that I've had to repair some MySQL tables, but I don't seem to have lost any data. But if my worries are not unfounded, and real damage could happen in either situation 1 or 2, then my hope is that there is a way to detect it and protect against it.

    My Question(s): Could this be because modern operating systems are designed to ensure that nothing is lost in these scenarios? Could this be because modern software is designed to ensure that nothing is lost? What about modern hardware design? What measures are in place when you pull the power plug? My question is, for both of these scenarios, what exactly can go wrong, and what steps should be taken to fix it?

    I'm under the impression that one thing that can go wrong is that some programs might not have flushed their data to the disk, so any highly recent data that was supposed to be written (say, a few seconds before the power pull) might be lost. But what about beyond that? And can this very issue of 5-second data loss screw up a system? What about corruption of random files hiding somewhere in the huge forest of files on my hard drives? What about hardware damage?

    What Would Help Me Most:

    - Detailed descriptions of what goes on internally when you either kill -9 a process or pull the power on the whole system. (It seems instant, but can someone slow it down for me?)
    - Explanations of all the things that could go wrong in these scenarios, along with (rough, of course) probabilities (i.e., this is very unlikely, but this is likely).
    - Descriptions of the measures in place in modern hardware, operating systems, and software to prevent damage or corruption when these scenarios occur. (To comfort me.)
    - Instructions for what to do after a kill -9 or a power pull, beyond "verifying the disk", in order to truly make sure nothing is corrupt or damaged somewhere on the drive.
    - Measures that can be taken to fortify a computer setup so that if something has to be killed or the power has to be pulled, any potential damage is mitigated.

    Thanks so much!
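    The flush window mentioned above is easy to picture in code. A generic illustration (not tied to any of the software mentioned): a successful write() only guarantees the data reached the kernel's page cache; until something like fsync() runs, a power pull can still lose it.

        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        int main(void) {
            int fd = open("journal.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
            if (fd < 0) { perror("open"); return 1; }

            const char *rec = "committed\n";
            if (write(fd, rec, strlen(rec)) < 0) { perror("write"); return 1; }

            /* Without this call, the record may exist only in RAM even though
               write() reported success; a power cut here could lose it. */
            if (fsync(fd) < 0) { perror("fsync"); return 1; }

            close(fd);
            return 0;
        }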

    Read the article

  • How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt?

    - by Stu Thompson
    Prelude: I'm a code-monkey that's increasingly taken on sysadmin duties for my small company. My code is our product, and increasingly we provide the same app as SaaS. About 18 months ago I moved our servers from a premium hosting-centric vendor to a barebones rack pusher in a tier IV data center (literally across the street). This meant doing much more ourselves--things like networking, storage and monitoring.

    As part of the big move, to replace our leased direct-attached storage from the hosting company, I built a 9TB two-node NAS based on SuperMicro chassis, 3ware RAID cards, Ubuntu 10.04, two dozen SATA disks, DRBD and NFSv4. It's all lovingly documented in three blog posts: Building up & testing a new 9TB SATA RAID10 NFSv4 NAS: Part I, Part II and Part III. We also set up a Cacti monitoring system, and recently we've been adding more and more data points, like SMART values. I could not have done all this without the awesome boffins at ServerFault. It's been a fun and educational experience. My boss is happy (we saved bucketloads of $$$), our customers are happy (storage costs are down), I'm happy (fun, fun, fun). Until yesterday.

    Outage & Recovery: Some time after lunch we started getting reports of sluggish performance from our application, an on-demand streaming media CMS. About the same time our Cacti monitoring system sent a blizzard of emails. One of the more telling alerts was a graph of iostat await. Performance became so degraded that Pingdom began sending "server down" notifications. The overall load was moderate; there was no traffic spike. After logging onto the application servers, NFS clients of the NAS, I confirmed that just about everything was experiencing highly intermittent and insanely long IO wait times. And once I hopped onto the primary NAS node itself, the same delays were evident when trying to navigate the problem array's file system. Time to fail over; that went well. Within 20 minutes everything was confirmed to be back up and running perfectly.

    Post-Mortem: After any and all system failures I perform a post-mortem to determine the cause of the failure. The first thing I did was ssh back into the box and start reviewing logs. It was offline, completely. Time for a trip to the data center. Hardware reset, back up and running. In /var/syslog I found this scary-looking entry:

        Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_00], 6 Currently unreadable (pending) sectors
        Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_07], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 171 to 170
        Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 16 Currently unreadable (pending) sectors
        Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 4 Offline uncorrectable sectors
        Nov 15 06:49:45 umbilo smartd[2827]: Num Test_Description Status                  Remaining LifeTime(hours) LBA_of_first_error
        Nov 15 06:49:45 umbilo smartd[2827]: # 1 Short offline    Completed: read failure 90%       6576            3421766910
        Nov 15 06:49:45 umbilo smartd[2827]: # 2 Short offline    Completed: read failure 90%       6087            3421766910
        Nov 15 06:49:45 umbilo smartd[2827]: # 3 Short offline    Completed: read failure 10%       5901            656821791
        Nov 15 06:49:45 umbilo smartd[2827]: # 4 Short offline    Completed: read failure 90%       5818            651637856

    So I went to check the Cacti graphs for the disks in the array. Here we see that, yes, disk 7 is slipping away just like syslog says it is. But we also see that disk 8's SMART read errors are fluctuating. There are no messages about disk 8 in syslog. More interesting is that the fluctuating values for disk 8 directly correlate to the high IO wait times! My interpretation is that:

    - Disk 8 is experiencing an odd hardware fault that results in intermittent long operation times.
    - Somehow this fault condition on the disk is locking up the entire array.

    Maybe there is a more accurate or correct description, but the net result has been that the one disk is impacting the performance of the whole array.

    The Question(s):

    - How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt?
    - Am I being naïve to think that the RAID card should have dealt with this?
    - How can I prevent a single misbehaving disk from impacting the entire array?
    - Am I missing something?
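    As an aside, for interrogating the suspect disk directly, smartmontools can address individual ports behind a 3ware controller (the port number here is assumed from the 3ware_disk_08 naming convention in the logs above):

        # Full SMART report for the disk on 3ware port 8
        smartctl -a -d 3ware,8 /dev/twa0
        # Queue a short self-test on that same disk
        smartctl -t short -d 3ware,8 /dev/twa0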

    Read the article

  • Ten - oh, wait, eleven - Eleven things you should know about the ASP.NET Fall 2012 Update

    - by Jon Galloway
    Today, just a little over two months after the big ASP.NET 4.5 / ASP.NET MVC 4 / ASP.NET Web API / Visual Studio 2012 / Web Matrix 2 release, the first preview of the ASP.NET Fall 2012 Update is out. Here's what you need to know:

    - There are no new framework bits in this release - there's no change or update to ASP.NET Core, ASP.NET MVC or Web Forms features. This means that you can start using it without any updates to your server, upgrade concerns, etc.
    - This update is really an update to the project templates and Visual Studio tooling, conceptually similar to the ASP.NET MVC 3 Tools Update.
    - It's a relatively lightweight install. It's a 41MB download. I've installed it many times and it usually takes 5-7 minutes; it's never required a reboot.
    - It adds some new project templates to ASP.NET MVC: Facebook Application and Single Page Application templates.
    - It adds a lot of cool enhancements to ASP.NET Web API.
    - It adds some tooling that makes it easy to take advantage of features like SignalR, Friendly URLs, and Windows Azure Authentication.
    - Most of the new features are installed via NuGet packages. Since ASP.NET is open source, nightly NuGet packages are available, and the roadmap is published, most of this has really been publicly available for a while.
    - The official name of this drop is the ASP.NET Fall 2012 Update BUILD Prerelease. Please do not attempt to say that ten times fast. While the EULA doesn't prohibit it, it WILL legally change your first name to Scott.
    - As with all new releases, you can find out everything you need to know about the Fall Update at http://asp.net/vnext (especially the release notes!)
    - I'm going to be showing all of this off, assisted by special guest code monkey Scott Hanselman, this Friday at BUILD: Bleeding edge ASP.NET: See what is next for MVC, Web API, SignalR and more… (and I've heard it will be livestreamed).

    Let's look at some of those things in more detail.

    No new bits

    ASP.NET 4.5, MVC 4 and Web API have a lot of great core features. I see the goal of this update release as making it easier to put those features to use to solve some useful scenarios, by taking advantage of NuGet packages and template code. If you create a new ASP.NET MVC application using one of the new templates, you'll see that it's using the ASP.NET MVC 4 RTM NuGet package (4.0.20710.0). This means you can install and use the Fall Update without any impact on your existing projects and no worries about upgrading or compatibility.

    New Facebook Application Template

    ASP.NET MVC 4 (and ASP.NET 4.5 Web Forms) included the ability to authenticate your users via OAuth and OpenID, so you could let users log in to your site using a Facebook account. One of the new changes in the Fall Update is a new template that makes it really easy to create full Facebook applications. You could create a Facebook application in ASP.NET already; you'd just need to go through a few steps:

    1. Search around to find a good Facebook NuGet package, like the Facebook C# SDK (written by my friend Nathan Totten and some other Facebook SDK brainiacs).
    2. Read the Facebook developer documentation to figure out how to authenticate and integrate with them.
    3. Write some code, debug it and repeat until you got something working.
    4. Get started with the application you'd originally wanted to write.

    What this template does for you: eliminate steps 1-3.
    Erik Porter, Nathan and some other experts built out the Facebook Application template so it automatically pulls in and configures the Facebook NuGet package and makes it really easy to take advantage of it in an ASP.NET MVC application. One great example is the way you access a Facebook user's information. Take a look at the following code in a File / New / MVC / Facebook Application site. First, the Home controller Index action:

        [FacebookAuthorize(Permissions = "email")]
        public ActionResult Index(MyAppUser user, FacebookObjectList<MyAppUserFriend> userFriends)
        {
            ViewBag.Message = "Modify this template to jump-start your Facebook application using ASP.NET MVC.";
            ViewBag.User = user;
            ViewBag.Friends = userFriends.Take(5);
            return View();
        }

    First, notice that there's a FacebookAuthorize attribute which requires that the user is authenticated via Facebook and requires permission to access their e-mail address. It binds to two things: a custom MyAppUser object and a list of friends. Let's look at the MyAppUser code:

        using Microsoft.AspNet.Mvc.Facebook.Attributes;
        using Microsoft.AspNet.Mvc.Facebook.Models;

        // Add any fields you want to be saved for each user and specify the field name
        // in the JSON coming back from Facebook
        // https://developers.facebook.com/docs/reference/api/user/
        namespace MvcApplication3.Models
        {
            public class MyAppUser : FacebookUser
            {
                public string Name { get; set; }
                [FacebookField(FieldName = "picture", JsonField = "picture.data.url")]
                public string PictureUrl { get; set; }
                public string Email { get; set; }
            }
        }

    You can add in other custom fields if you want, but you can also just bind to a FacebookUser and it will automatically pull in the available fields. You can even bind directly to a FacebookUser and check what's available in debug mode, which makes it really easy to explore. For more information and some walkthroughs on creating Facebook applications, see:

    - Deploying your first Facebook App on Azure using ASP.NET MVC Facebook Template (Yao Huang Lin)
    - Facebook Application Template Tutorial (Erik Porter)

    Single Page Application template

    Early releases of ASP.NET MVC 4 included a Single Page Application template, but it was removed for the official release. There was a lot of interest in it, but it was kind of complex, as it handled features for things like data management. The new Single Page Application template that ships with the Fall Update is more lightweight. It uses Knockout.js on the client and ASP.NET Web API on the server, and it includes a sample application that shows how they all work together. I think the real benefit of this application is that it shows a good pattern for using ASP.NET Web API and Knockout.js. For instance, it's easy to end up with a mess of JavaScript when you're building out a client-side application. This template uses three separate JavaScript files (delivered via a Bundle, of course):

    - todoList.js - this is where the main client-side logic lives
    - todoList.dataAccess.js - this defines how the client-side application interacts with the back-end services
    - todoList.bindings.js - this is where you set up events and overrides for the Knockout bindings - for instance, hooking up jQuery validation and defining some client-side events

    This is a fun one to play with, because you can just create a new Single Page Application and hit F5.

    Quick, easy install (with one gotcha)

    One of the cool engineering changes for this release is a big update to the installer to make it more lightweight and efficient.
    I've been running nightly builds of this for a few weeks to prep for my BUILD demos, and the install has been really quick and easy to use. The install takes about 5 minutes, has never required a reboot for me, and the uninstall is just as simple. There's one gotcha, though: in this preview release, you may hit an issue that requires you to uninstall and re-install the NuGet VSIX package. The problem comes up when you create a new MVC application and are greeted with an error dialog. The solution, as explained in the release notes, is to uninstall and re-install the NuGet VSIX package:

    1. Start Visual Studio 2012 as an Administrator.
    2. Go to Tools -> Extensions and Updates and uninstall NuGet.
    3. Close Visual Studio.
    4. Navigate to the ASP.NET Fall 2012 Update installation folder:
       - For Visual Studio 2012: Program Files\Microsoft ASP.NET\ASP.NET Web Stack\Visual Studio 2012
       - For Visual Studio 2012 Express for Web: Program Files\Microsoft ASP.NET\ASP.NET Web Stack\Visual Studio Express 2012 for Web
    5. Double-click on NuGet.Tools.vsix to reinstall NuGet.

    This took me under a minute to do, and I was up and running.

    ASP.NET Web API Update Extravaganza!

    Uh, the Web API team is out of hand. They added a ton of new stuff: OData support, tracing, and API help page generation.

    OData support

    Some people like OData. Some people start twitching when you mention it. If you're in the first group, this is for you. You can add a [Queryable] attribute to an API that returns an IQueryable<Whatever> and you get OData query support from your clients. Then, without any extra changes to your client or server code, your clients can send filters like this:

        /Suppliers?$filter=Name eq 'Microsoft'

    For more information about OData support in ASP.NET Web API, see Alex James' mega-post about it: OData support in ASP.NET Web API.

    ASP.NET Web API Tracing

    Tracing makes it really easy to leverage the .NET tracing system from within your ASP.NET Web APIs. If you look at the \App_Start\WebApiConfig.cs file in a new ASP.NET Web API project, you'll see a call to TraceConfig.Register(config). That calls into some code in the new \App_Start\TraceConfig.cs file:

        public static void Register(HttpConfiguration configuration)
        {
            if (configuration == null)
            {
                throw new ArgumentNullException("configuration");
            }

            SystemDiagnosticsTraceWriter traceWriter = new SystemDiagnosticsTraceWriter()
            {
                MinimumLevel = TraceLevel.Info,
                IsVerbose = false
            };

            configuration.Services.Replace(typeof(ITraceWriter), traceWriter);
        }

    As you can see, this is using the standard trace system, so you can extend it to any other trace listeners you'd like.
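    Since the trace writer is registered through the ITraceWriter service, you can also swap in your own. A minimal sketch (my illustration, not from the release; the class name and output format are made up, and System.Net.Http / System.Web.Http.Tracing usings are assumed):

        // Hypothetical custom writer: dump Web API trace records to the console.
        public class ConsoleTraceWriter : ITraceWriter
        {
            public void Trace(HttpRequestMessage request, string category,
                              TraceLevel level, Action<TraceRecord> traceAction)
            {
                var record = new TraceRecord(request, category, level);
                traceAction(record); // the framework populates the record
                Console.WriteLine("{0} {1} {2}", level, record.Operation, record.Message);
            }
        }

        // Registered the same way as above:
        //   config.Services.Replace(typeof(ITraceWriter), new ConsoleTraceWriter());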
    To see how it works with the built-in diagnostics trace writer, just run the application, call some APIs, and look at the Visual Studio Output window:

        iisexpress.exe Information: 0 : Request, Method=GET, Url=http://localhost:11147/api/Values, Message='http://localhost:11147/api/Values'
        iisexpress.exe Information: 0 : Message='Values', Operation=DefaultHttpControllerSelector.SelectController
        iisexpress.exe Information: 0 : Message='WebAPI.Controllers.ValuesController', Operation=DefaultHttpControllerActivator.Create
        iisexpress.exe Information: 0 : Message='WebAPI.Controllers.ValuesController', Operation=HttpControllerDescriptor.CreateController
        iisexpress.exe Information: 0 : Message='Selected action 'Get()'', Operation=ApiControllerActionSelector.SelectAction
        iisexpress.exe Information: 0 : Operation=HttpActionBinding.ExecuteBindingAsync
        iisexpress.exe Information: 0 : Operation=QueryableAttribute.ActionExecuting
        iisexpress.exe Information: 0 : Message='Action returned 'System.String[]'', Operation=ReflectedHttpActionDescriptor.ExecuteAsync
        iisexpress.exe Information: 0 : Message='Will use same 'JsonMediaTypeFormatter' formatter', Operation=JsonMediaTypeFormatter.GetPerRequestFormatterInstance
        iisexpress.exe Information: 0 : Message='Selected formatter='JsonMediaTypeFormatter', content-type='application/json; charset=utf-8'', Operation=DefaultContentNegotiator.Negotiate
        iisexpress.exe Information: 0 : Operation=ApiControllerActionInvoker.InvokeActionAsync, Status=200 (OK)
        iisexpress.exe Information: 0 : Operation=QueryableAttribute.ActionExecuted, Status=200 (OK)
        iisexpress.exe Information: 0 : Operation=ValuesController.ExecuteAsync, Status=200 (OK)
        iisexpress.exe Information: 0 : Response, Status=200 (OK), Method=GET, Url=http://localhost:11147/api/Values, Message='Content-type='application/json; charset=utf-8', content-length=unknown'
        iisexpress.exe Information: 0 : Operation=JsonMediaTypeFormatter.WriteToStreamAsync
        iisexpress.exe Information: 0 : Operation=ValuesController.Dispose

    API Help Page

    When you create a new ASP.NET Web API project, you'll see an API link in the header. Clicking the API link shows generated help documentation for your ASP.NET Web API controllers, and clicking on any of those APIs shows specific information. What's great is that this information is dynamically generated, so if you add your own new APIs it will automatically show useful and up-to-date help. This system is also completely extensible, so you can generate documentation in other formats or customize the HTML help as much as you'd like. The help generation code is all included in an ASP.NET MVC Area.

    SignalR

    SignalR is a really slick open source project that was started by some ASP.NET team members in their spare time to add real-time communications capabilities to ASP.NET - and .NET applications in general. It allows you to handle long-running communications channels between your server and multiple connected clients, using the best communications channel they can both support - WebSockets if available, falling back all the way to old technologies like long polling for old browsers. SignalR remains an open source project, but now it's being included in ASP.NET (also open source, hooray!). That means there's real, official ASP.NET engineering work being put into SignalR, and it's even easier to use in an ASP.NET application. Now, in any ASP.NET project type, you can right-click / Add / New Item... SignalR Hub or Persistent Connection.

    And much more...

    There's quite a bit more.
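    To give a feel for what the Hub item boils down to, here's a minimal hub along the lines of the SignalR samples (a sketch using the SignalR API of that era; the class and method names are my own, not from the template):

        // Hypothetical hub: anything a client sends is broadcast to everyone.
        public class ChatHub : Hub
        {
            public void Send(string message)
            {
                // Dynamic dispatch: invokes the addMessage callback on every client.
                Clients.All.addMessage(message);
            }
        }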
    You can find more info at http://asp.net/vnext, and we'll be adding more content as fast as we can. Watch my BUILD talk to see me demonstrate these and other features in the ASP.NET Fall 2012 Update, as well as some other even futurey-er stuff!

    Read the article

  • My Code Kata–A Solution Kata

    - by Glav
    There are many developers and coders out there who like to do code katas to keep their coding ability up to scratch and to practice their skills. I think it is a good idea. While I like the concept, I find them dead boring and of minimal purpose. Yes, they serve to hone your skills, but that's about it. They are often quite abstract, in that they usually focus on a small problem set requiring specific solutions. That is fair enough, as that is how they are designed, but again, I find them quite boring.

    What I personally like to do is go for something a little larger and a little more fun. It takes a little more time and is not as easily executed as a kata, but it serves the same purpose from a practice perspective and allows me to continue to solve some problems that are not directly part of the initial goal. This means I can cover a broader learning range and have a bit more fun. If I am lucky, sometimes they even end up being useful tools. With that in mind, I thought I'd share my current 'kata'. It is not really a code kata, as it is too big; I prefer to think of it as a 'solution kata'. The code is on bitbucket here.

    What I wanted to do was create a kind of simplistic virtual world where I can create a player, or a class, stuff it into the world, and see if it survives and can navigate its way to the exit. Requirements were pretty simple:

    - Must be able to define a map to describe the world using simple X,Y co-ordinates. Z co-ordinates as well, if you feel like getting clever.
    - Should have the concept of entrances, exits, solid blocks, and potentially other materials (again, if you want to get clever).
    - A coder should be able to easily write a class which will act as an inhabitant of the world.
    - An inhabitant will receive stimulus from the world in the form of the surrounding environment and be able to make a decision on an action, which it passes back to the 'world' for processing.
    - At a minimum, an inhabitant will have sight and speed characteristics which determine how far they can 'see' in the world, and how fast they can move.
    - Coders who write a really bad 'inhabitant' should not adversely affect the rest of the world.
    - Should allow multiple inhabitants in the world.

    So that was the solution I set out to act as a practice solution and a little bit of fun. It had some interesting problems to solve, and I figured, if it turned out OK, I could potentially use this as a 'developer test' for interviews: ask a potential coder to write a class for an inhabitant; show the coder the map they will navigate, but also mention that we will use their code to navigate a map they have not yet seen that is a little more complex.

    I have been playing with the solution for a short time now and have it working in basic concepts. Below is a screen shot using a very basic console visualiser that shows the map, boundaries, blocks, entrance, exit and players/inhabitants. The yellow asterisks '*' are the players, green 'O' the entrance, purple '^' the exit, and maroon/browny '#' are solid blocks. The players can move around at different speeds, knock into each other, and make directional movement decisions based on what they see and who is around them.

    It has been quite fun to write, and it is also quite fun to develop different players to inject into the world. The code below shows a really simple implementation of an inhabitant that can work out what to do based on stimulus from the world. It is pretty simple and just tries to move in some direction if there is nothing blocking the path.
        public class TestPlayer : LivingEntity
        {
            public TestPlayer()
            {
                Name = "Beta Boy";
                LifeKey = Guid.NewGuid();
            }

            public override ActionResult DecideActionToPerform(EcoDev.Core.Common.Actions.ActionContext actionContext)
            {
                try
                {
                    var action = new MovementAction();
                    // move forward if we can
                    if (actionContext.Position.ForwardFacingPositions.Length > 0)
                    {
                        if (CheckAccessibilityOfMapBlock(actionContext.Position.ForwardFacingPositions[0]))
                        {
                            action.DirectionToMove = MovementDirection.Forward;
                            return action;
                        }
                    }
                    if (actionContext.Position.LeftFacingPositions.Length > 0)
                    {
                        if (CheckAccessibilityOfMapBlock(actionContext.Position.LeftFacingPositions[0]))
                        {
                            action.DirectionToMove = MovementDirection.Left;
                            return action;
                        }
                    }
                    if (actionContext.Position.RearFacingPositions.Length > 0)
                    {
                        if (CheckAccessibilityOfMapBlock(actionContext.Position.RearFacingPositions[0]))
                        {
                            action.DirectionToMove = MovementDirection.Back;
                            return action;
                        }
                    }
                    if (actionContext.Position.RightFacingPositions.Length > 0)
                    {
                        if (CheckAccessibilityOfMapBlock(actionContext.Position.RightFacingPositions[0]))
                        {
                            action.DirectionToMove = MovementDirection.Right;
                            return action;
                        }
                    }
                    return action;
                }
                catch (Exception ex)
                {
                    World.WriteDebugInformation("Player: " + Name, string.Format("Player Generated exception: {0}", ex.Message));
                    throw ex;
                }
            }

            private bool CheckAccessibilityOfMapBlock(MapBlock block)
            {
                if (block == null || block.Accessibility == MapBlockAccessibility.AllowEntry
                    || block.Accessibility == MapBlockAccessibility.AllowExit
                    || block.Accessibility == MapBlockAccessibility.AllowPotentialEntry)
                {
                    return true;
                }
                return false;
            }
        }

    It is simple and it seems to work well. The world implementation itself decides the stimulus context that is passed to the inhabitant to make an action decision. All movement is carried out on separate threads and timed appropriately to be as fair as possible and to cater for additional skills such as speed, and eventually maybe stamina and strength, with actions like fighting. It is pretty fun to make up random maps and see how your inhabitant does. You can download the code from here.

    Along the way I have played with parallel extensions to spread the compute-intensive stuff across all cores, had to heavily factor in the visibility of methods and properties (so the design of classes was paramount), and worked out movement algorithms that play fairly in the world and properly favour the players with higher abilities, as well as a host of other issues. So that is my 'solution kata'. If I keep going with it, I may develop a web interface for it where people can upload assemblies and watch their player within a web browser visualiser, and maybe even a map designer. What do you do to keep the fires burning?

    Read the article

  • NVIDIA X server - "sudo nvidia-xconfig" does not generate a working 'xorg.conf'

    - by Mike
    I am over 18 hours deep on this challenge. I got to this point and am stuck. Very stuck. Maybe you can figure it out? Ubuntu version: 12.04 LTS with all the updates installed.

    Problem: The default settings in "/etc/X11/xorg.conf" that are generated by the "nvidia-xconfig" tool do not allow the NVIDIA X server to connect to the driver shown in my "System Settings > Additional Drivers" window. (That's how I understand it; lots of information below.)

    Symptoms of the problem:

    1. The "System Settings > Additional Drivers" window has drivers, but the NVIDIA X server cannot connect to/utilize any of the four drivers. The drivers are activated, but not in use.
    2. When I go to "System Tools > Administration > NVIDIA X Server Settings" I get an error that basically tells me to create a default file to initialize the NVIDIA X server (screen shot below).
    3. These are the messages the terminal gives after running "sudo nvidia-xconfig" for the first time. It seems that the file generated by the tool I just ran is bad/unusable.
    4. If I run "sudo nvidia-xconfig" again, I won't get an error the second time. However, when I reboot, the default file that is generated (/etc/X11/xorg.conf) simply puts the screen resolution at 800 x 600 (or something big like that). When I try to go to NVIDIA X Server Settings I am greeted with the same screen as the screen shot in symptom 2 (no option to change the resolution). If I try to go to "System Settings > Displays" there are no other resolutions to choose from. At this point I must delete the newly minted "xorg.conf" and reinstate the original in its place.

    Here are the contents of the "xorg.conf" that is generated first (the one missing required information):

    # nvidia-xconfig: X configuration file generated by nvidia-xconfig
    # nvidia-xconfig: version 304.88 (buildmeister@swio-display-x86-rhel47-06) Wed Mar 27 15:32:58 PDT 2013

    Section "ServerLayout"
        Identifier  "Layout0"
        Screen 0    "Screen0"
        InputDevice "Keyboard0" "CoreKeyboard"
        InputDevice "Mouse0" "CorePointer"
    EndSection

    Section "Files"
    EndSection

    Section "InputDevice"
        # generated from default
        Identifier "Mouse0"
        Driver     "mouse"
        Option     "Protocol" "auto"
        Option     "Device" "/dev/psaux"
        Option     "Emulate3Buttons" "no"
        Option     "ZAxisMapping" "4 5"
    EndSection

    Section "InputDevice"
        # generated from default
        Identifier "Keyboard0"
        Driver     "kbd"
    EndSection

    Section "Monitor"
        Identifier  "Monitor0"
        VendorName  "Unknown"
        ModelName   "Unknown"
        HorizSync   28.0 - 33.0
        VertRefresh 43.0 - 72.0
        Option      "DPMS"
    EndSection

    Section "Device"
        Identifier "Device0"
        Driver     "nvidia"
        VendorName "NVIDIA Corporation"
    EndSection

    Section "Screen"
        Identifier   "Screen0"
        Device       "Device0"
        Monitor      "Monitor0"
        DefaultDepth 24
        SubSection "Display"
            Depth 24
        EndSubSection
    EndSection

    Hardware: I ran "lspci | grep VGA". The results are:

    00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
    01:00.0 VGA compatible controller: NVIDIA Corporation GF108 [Quadro 1000M] (rev a1)

    More hardware info: RAM: 16GB. CPU: Intel Core i7-2720QM @ 2.2GHz x 8. Other: 64-bit; this is a triple-boot computer, not a VM.

    Attempts with no success on my end:
    1) Tried to append the "xorg.conf" with what I perceive is the missing information, and obviously it didn't fly.
    2) All the other stuff I tried got me to this point.
    3) See if this link is helpful to you (I barely get it, but I get enough to know that a smarter person might find this useful): http://manpages.ubuntu.com/manpages/lucid/man1/nvidia-xconfig.1.html
    4) I am completely new to Linux (40 hours over the past week), but not to programming. However, I am very serious about changing over to Linux. When you respond (I hope someone responds...), please respond in a way that a person new to Linux can understand.
    5) By the way, the reason I am in this mess is because I MUST have a second monitor running from my laptop, and "System Settings > Displays" doesn't recognize my second display. I know it is possible to make the second display work on my system, because when I boot from the install CD, I perform work on the native laptop monitor while the second monitor shows a purple screen with "Ubuntu" in the middle, so I know the VGA port is sending a signal out. If this is too much for you to tackle, please suggest an alternative method to get a second display. I don't want to go back to Windows, but I cannot have a single display. I am really fudged here. I hope some smart person can help. Thanks in advance. Mike.

    EDIT #1 - More details about the graphics card

    I was asked "which brand of nvidia-card do you have exactly?" Here is what I did to provide more info (maybe relevant, maybe not, but here is everything):

    1) Took my Lenovo W520 right apart to see if there is an identifier on the actual card. However, I realized that if I got deep enough to take a look, the laptop "wouldn't like it", so I put it back together. Figuring out the card this way is not an option for me right now.
    2) (My computer is triple-boot.) I logged into Win7 and ran the 'dxdiag' command. Here is the screen shot:
    3) I tried to look on the Lenovo website for more details... but no luck. I took a look at my receipts, and here is the info from the receipt: System Unit: W520 NVIDIA Quadro 1000M 2GB
    4) In Win7 I went to the NVIDIA website and used the option to have my card 'scanned' by a Java applet to determine the latest update for my card. I tried the same with Ubuntu but I can't get the applet to run. Here is the recommended driver from the NVIDIA applet for my card on Win7 (I hope this shines some light on the specifics of the card): Quadro/NVS/Tesla/GRID Desktop Driver Release R319, Version: 320.00 WHQL, Release Date: 3.5.2013
    5) Also, I went on the NVIDIA driver search and looked through every possible combination of product type + product series + product to find all the combinations that yield a 1000M card. My card is: Product Type: Quadro; Product Series: Quadro Series (Notebooks); Product: 1000M

    EDIT #2 - Additional symptoms

    Another question that generated more symptoms I previously didn't mention was: "After generating xorg.conf by nvidia-xconfig, go to additional drivers, do you see nvidia-304?"

    1) I took a screen shot of the "Additional Drivers" window right after generating xorg.conf with nvidia-xconfig. Here it is:
    2) Then I did a reboot. Now Ubuntu is at 800 x 600 resolution. When I logged in after the computer came up, I got an error (which I always get after generating xorg.conf with nvidia-xconfig and rebooting).
    3) To finally answer the question - no, there is no "nvidia-304" driver. Screen shot of Additional Drivers after generating xorg.conf with nvidia-xconfig and rebooting:

    At this point I revert to the original xorg.conf and delete the xorg.conf generated by NVIDIA.

    Read the article

  • CodePlex Daily Summary for Friday, November 09, 2012

    CodePlex Daily Summary for Friday, November 09, 2012Popular ReleasesPlayer Framework by Microsoft: Player Framework for Windows 8 (v1.0): IMPORTANT: List of breaking changes from preview 7 Ability to move control panel or individual elements outside media player. more info... New Entertainment app theme for out of the box support for Windows 8 Entertainment app guidelines. more info... VSIX reference names shortened. Allows seeing plugin name from "Add Reference" dialog without resizing. FreeWheel SmartXML now supports new "Standard" event callback type. Other minor misc fixes and improvements ADDITIONAL DOWNLOADSSmo...WebSearch.Net: WebSearch.Net 3.1: WebSearch.Net is an open-source research platform that provides uniform data source access, data modeling, feature calculation, data mining, etc. It facilitates the experiments of web search researchers due to its high flexibility and extensibility. The platform can be used or extended by any language compatible for .Net 2 framework, from C# (recommended), VB.Net to C++ and Java. Thanks to the large coverage of knowledge in web search research, it is necessary to model the techniques and main...Umbraco CMS: Umbraco 4.10.0: NugetNuGet BlogRead the release blog post for 4.10.0. Whats newMVC support New request pipeline Many, many bugfixes (see the issue tracker for a complete list) Read the documentation for the MVC bits. Breaking changesWe have done all we can not to break backwards compatibility, but we had to do some minor breaking changes: Removed graphicHeadlineFormat config setting from umbracoSettings.config (an old relic from the 3.x days) U4-690 DynamicNode ChildrenAsList was fixed, altering it'...MySQL Tuner for Windows: 0.3: Welcome to the third beta of MySQL Tuner for Windows! This release fixes bugs in the displaying of numbers, and a crash that occurred due to the program incorrectly closing and disposing of resources, Be warned that there will be bugs in this release, so please do not use on production or critical systems. Do post details of issues found to the issue tracker, and I will endeavour to fix them, when I can. I would love to have your feedback, and if possible your support! Requirements Microso...SharePoint Manager 2013: SharePoint Manager 2013 Release ver 1.0.12.1106: SharePoint Manager 2013 Release (ver: 1.0.12.1106) is now ready for SharePoint 2013. The new version has an expanded view of the SharePoint object model and has been tested on SharePoint 2013 RTM. As a bonus, the new version is also available for SharePoint 2010 as a separate download.GL2DX (OpenGL to DirectX Wrapper Library): GL2DX Release 1: The first release contains all source code for the GL2DX library and source code for the sample project demonstrating how to use the library for a Direct3D / XAML hybrid Windows 8 application.WCF y Net remoting en el famoso ejemplo HOLA MUNDO: Ejemplo: Versión liberadaD3D9Client: D3D9Client R7: New release for Orbiter 2010-P1 - Added horizon/sun angle for night-lights into the configuration file (default 10deg) - Some runway lights related bugs are fixed - Added more configuration options for runway lightsFiskalizacija za developere: FiskalizacijaDev 1.2: Verzija 1.2. je, prije svega, odgovor na novu verziju Tehnicke specifikacije (v1.1.) koja je objavljena prije nekoliko dana. Pored novosti vezanih uz (sitne) izmjene u spomenutoj novoj verziji Tehnicke dokumentacije, projekt smo prošili sa nekim dodatnim feature-ima od kojih je vecina proizašla iz vaših prijedloga - hvala :) Novosti u v1.2. 
su: - Neusuglašenost zahtjeva (http://fiskalizacija.codeplex.com/workitem/645) - Sample projekt - iznosi se množe sa 100 (http://fiskalizacija.codeplex.c...MFCMAPI: October 2012 Release: Build: 15.0.0.1036 Full release notes at SGriffin's blog. If you just want to run the MFCMAPI or MrMAPI, get the executables. If you want to debug them, get the symbol files and the source. The 64 bit builds will only work on a machine with Outlook 2010 64 bit installed. All other machines should use the 32 bit builds, regardless of the operating system. Facebook BadgeJayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.2.3: JayData is a unified data access library for JavaScript to CRUD + Query data from different sources like OData, MongoDB, WebSQL, SqLite, HTML5 localStorage, Facebook or YQL. The library can be integrated with Knockout.js or Sencha Touch 2 and can be used on Node.js as well. See it in action in this 6 minutes video Sencha Touch 2 example app using JayData: Netflix browser. What's new in JayData 1.2.3 For detailed release notes check the release notes. TypeScript supportWrite your code in a ...MCEBuddy 2.x: MCEBuddy 2.3.7: Changelog for 2.3.7 (32bit and 64bit) 1. Improved performance of MP4 Fast and M4V Fast Profiles (no deinterlacing, removed --decomb) 2. Improved priority handling 3. Added support for Pausing and Resume conversions 4. Added support for fallback to source directory if network destination directory is unavailable 5. MCEBuddy now installs ShowAnalyzer during installation 6. Added support for long description atom in iTunesFoxyXLS: FoxyXLS Releases: Source code and samplesHTML Renderer: HTML Renderer 1.0.0.0 (3): Major performance improvement (http://theartofdev.wordpress.com/2012/10/25/how-i-optimized-html-renderer-and-fell-in-love-with-vs-profiler/) Minor fixes raised in issue tracker and discussions.Window Manager: Window Manager 1.0: First releaseProDinner - ASP.NET MVC Sample (EF4.4, N-Tier, jQuery): 8: update to ASP.net MVC Awesome 3.0 udpate to EntityFramework 4.4 update to MVC 4 added dinners grid on homepageASP.net MVC Awesome - jQuery Ajax Helpers: 3.0: added Grid helper added XML Documentation added textbox helper added Client Side API for AjaxList removed .SearchButton from AjaxList AjaxForm and Confirm helpers have been merged into the Form helper optimized html output for AjaxDropdown, AjaxList, Autocomplete works on MVC 3 and 4BlogEngine.NET: BlogEngine.NET 2.7: Cheap ASP.NET Hosting - $4.95/Month - Click Here!! Click Here for More Info Cheap ASP.NET Hosting - $4.95/Month - Click Here! If you want to set up and start using BlogEngine.NET right away, you should download the Web project. If you want to extend or modify BlogEngine.NET, you should download the source code. If you are upgrading from a previous version of BlogEngine.NET, please take a look at the Upgrading to BlogEngine.NET 2.7 instructions. If you looking for Web Application Project, ...Launchbar: Launchbar 4.2.2.0: This release is the first step in cleaning up the code and using all the latest features of .NET 4.5 Changes 4.2.2 (2012-11-02) Improved handling of left clicks 4.1.0 (2012-10-17) Removed tray icon Assembly renamed and signed with strong name Note When you upgrade, Launchbar will start with the default settings. 
You can import your previous settings by following these steps: Run Launchbar and just save the settings without configuring anything Shutdown Launchbar Go to the folder %LOCA...CommonLibrary.NET: CommonLibrary.NET 0.9.8.8: Releases notes for FluentScript located at http://fluentscript.codeplex.com/wikipage?title=Release%20Notes&referringTitle=Documentation Fluentscript - 0.9.8.8 - Final ReleaseApplication: FluentScript Version: 0.9.8.8 Build: 0.9.8.8 Changeset: 77368 ( CommonLibrary.NET ) Release date: November 2nd, 2012 Binaries: CommonLibrary.dll Namespace: ComLib.Lang Project site: http://fluentscript.codeplex.com/ Download: http://commonlibrarynet.codeplex.com/releases/view/90426 Source code: http://common...New Projects.NET Data Export Examples: This project is created to export data in C#,VB.NET from database,listview,command to PDF, Word,Excel,RTF,Html,XML,Access,DBF,SQL Script,SYLK,DIF,CSV,Clipboardb9b18a35-a80a-440c-bb8c-195be0225cfa: b9b18a35-a80a-440c-bb8c-195be0225cfaBudget Monkey: Personal Budgeting and Expense Tracking Application - ASP.Net MVC 4 - LINQ - Sql Server ClassLibrary5: ?0?????ORM ??:http://www.cnblogs.com/wushilonng/archive/2011/11/21/2257657.html ??????????????.CodeTextBox: CodeTextBox is a Windows Forms control for colorizing code while you are typing in the text box.Content Management System: web projectCopperfield: Copperfield is an extensibility framework built around contextual awareness. It is intended to be used in combination with one's DI / IoC framework of choice.Cyfuscator: This project show how create your own obfuscator for application .net, as a protection source codeData Browser: The Data Browser is a Windows 8 application that provides a great way to navigate and explore data (ODATA, RDF, ATOM, RDFa) published on the web. DevTask: Dev Task With MVC 4 and EF5DVB Viewer EPG Update Script: DVB Viewer EPG Update ScriptFcompress: Fcompress est un programme de compression simple, rapide et efficace de texte (mots). Programme conçu pour Eurêmaths, groupe Europole.Fdecompress: Fdecompress sert à décomprimer tout mot ou texte comprimer avec Fcompress.Hiren's Boot CD Program Launcher (Unicode support): Hiren's Boot CD Program Launcher (Unicode support)Image Gallery for WPF: A simple image gallery controlIMDb API: C# Class for grabbing data from the IMDb website.License Migration Live@edu Exchange to Office 365 A2: Only a little help to change the licenses after migration from Live@edu to Office 365MvcMSFootballManager: ASP MVC3 self-trainingNage: .NET Agent-based evolution frameworkOE NIK szakirány féléves: Féléves feladat.Office365 Helper: Office365 Helper is a collection of classes and methods to assist in administering and developing solutions for SharePoint online.Powershell Depo: This a collection of Powershell scripts that are collected and modified from anywhere and everywhere. Feel free to download whatever you need.Proejct13251109: sssProject13271109: sdfgdRemotableViewModel: The RemotableViewModel (RVM) library allows sharing of ViewModels accross process boundaries following the Model-View-ViewModel (MVVM) pattern.ScriptEngine: ScriptEngine for C#, JavaScript, VB. The ScriptEngine enables perform script snippets/files. 
In addition, the scripts can be cached.Shibboleth IdP Loader for ADFS: A PowerShell script to load Shibboleth federation IdPs as ADFS trusted claims providerSparklings: Game in which you change sparklings abilities using spots of light to help them reach the exit.Stala - Auto-update essentials: C# library and examples for event driven auto-updates without predefined graphical components/windows. Works with MSI packages and installer over web/intranet.STeaL : stealed functionarities from STL: STeaL is a .NET class library implemented with C# and enables you to use STL functionarities. especially, various of IList<T> extensions included.TechQueue: A series of Projects and code which are described in the blog http://techqueue.wordpress.comtesttom11082012git03: fsatesttom11082012tfs02: dsToyunda Phone: A simple searchable list of the songs available at the Karaoke rooms in the Epitanime event ran by students at the school named Epita in Paris.WebSearch.Net: WebSearch.Net is an open-source research platform that provides uniform data source access, data modeling, feature calculation, data mining, etc. Win8Controls: This project contains XAML based calendar control for WinRT / Windows Store Style application.WindowsCommonStorage: windows common storage

    Read the article

  • SOA Suite Integration: Part 3: Loading files

    - by Anthony Shorten
    One of the most common scenarios in SOA integration is the loading of a file into the product from an external source. In Oracle SOA Suite there is a File Adapter that can process many file types into your BPEL process. For this example I will use the File Adapter to load a file of users and emails to update the user object within the Oracle Utilities Application Framework. Remember, you can repeat this process with other objects and other file types; again, I am illustrating the ease of integration.

    The first thing is to create an empty BPEL process that will hold our flow. In Oracle JDeveloper this can be achieved by specifying the Define Service Later template (the other templates have predefined inputs and outputs, and in this case we want to specify those ourselves). So I will create a simpleFileLoad process to house our flow.

    You will start with an empty canvas, so you need to first specify the load part of the process using the File Adapter. Select the File Adapter from the Component Palette under BPEL Services and drag and drop it to the left-side Partner Links (left is input). You name the service; in this case I chose LoadFile. Press Next.

    We will define the interface as part of the wizard, so select Define from operation and schema (specified later). Press Next. We are going to choose Read File to denote that we will read the file, and accept the default Operation Name of Read. Press Next.

    The next step is to tell the adapter the location of the files, how to process them, and what to do with them after they have been processed. I am using hard-coded locations in this example, but you can have logical locations as well. Press Next. I am now going to tell the adapter how to recognize the files I want to load. In my case I am using CSV files and, more importantly, I am telling the adapter to run the process for each record in the file it encounters. Press Next. Now I tell the adapter how often I want to poll for the files. I have taken the defaults. Press Next.

    At this stage I have given no explanation of the format of the input, so I am going to invoke the Native Format Wizard, which will guide me through the process of creating the file input format. Clicking the purple cog icon will start the wizard. After an introduction screen (not shown), you specify the format of the input file. The File Adapter supports multiple format types; for this example I will use Delimited, as I am going to load a CSV file. Press Next.

    The best way for the wizard to work is with a sample. I have a sample file, and the wizard will ask how much of the file to use as a template. I will use the defaults. Note: if you are using a character set other than US-ASCII, this is the point at which you specify it. Press Next. The sample contains multiple instances of a single record type (the wizard supports complex types as well); we will use the appropriate setting for our file. Press Next.

    You have to specify the file element and the record element. These will be used by the wizard to translate the CSV data into an XML structure (this will make sense later). I am using LoadUsers as my file delimiter (root element) and UserRecord as my record root element. Press Next. As the file is CSV, the delimiter is ",", and I also specify that the End Of Line (EOL) indicator marks the end of a record. Press Next.

    Up until this point you have not given the columns their names. In my case my sample includes the column names in the first record. This is not always the case, but you can specify the names and formats of columns in this dialog (not shown). Press Next.

    The wizard now generates the schema for the input file. You can specify a name for the schema; I have used userupdate.xsd. We want to verify the schema, so press Test. You can test the schema by specifying an input sample and pressing the green play button; you will see the delimiters you specified earlier for the file and the records. Press OK to continue. A confirmation screen will be displayed showing you the location of the schema in your project. Press Finish to return to the File Adapter configuration. You will now see the schema and elements pre-populated from the wizard. Press Next. The File Adapter configuration is now complete. Press Finish.

    Now you need to receive the input from the LoadFile component, so place a Receive node in the BPEL process by dragging and dropping the Receive component from the Component Palette under BPEL Constructs onto the BPEL process. Link the Receive node with the LoadFile component by dragging the left-most connect node of the Receive node to the LoadFile component. Once the link is established, name the Receive node appropriately and, as in the post of the last part of this series, generate input variables for the BPEL process to hold the input records.

    You now need to add the product Web Service. The process is the same as described in the post of the last part of this series: you drop the Web Service BPEL service onto the right side of the process and fill in the details of the WSDL URL. You also have to add an Invoke node to call the service and generate the input and output variables for the call in the Invoke node.

    Now, to get the inputs from the file to the service, you use a Transform (you can use an Assign action, but a Transform action is more flexible). Drag and drop the Transform component from the Component Palette under Oracle Extensions and place it between the Receive and Invoke nodes. We name the Transform node "Mapper File" and associate the source of the mapping with the schema from the Receive node; the output will be the input variable from the Invoke node.

    We now build the transform. We first map the user and email attributes by dragging and dropping the elements from the left to the right. The reason we needed to use the Transform is that we will be telling the AS-User service that we want to issue an update action. Remember, when we registered the service we actually used Read as the default; if we do not otherwise inform the service to use the Update action, it will use the Read action instead (which is not desired). To specify the update action, click on the transactionType node on the right and select Set Text to set the action. You need to specify a transactionType of UPD (for update). The mapping is now complete.

    The final BPEL process is ready for deployment. You then deploy the BPEL process to the server and test the service by simply dropping a file, matching the pattern/name you specified, into the directory you specified in the File Adapter. You will see each record as a separate instance entry in the Fusion Middleware Control console.

    You can now load files into the product, and you can repeat this process for each type of file to process. While this was a simple example, it illustrates how loading data can be achieved using SOA Suite in conjunction with our products.
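    To make the native-format translation concrete, here is a rough illustration. It is not taken from the actual project: the column names and the exact XML shape are my assumptions, while LoadUsers and UserRecord are the element names chosen in the wizard above. A sample CSV input, with the column names in the first record, might look like:

    name,email
    jdoe,john.doe@example.com
    bsmith,bob.smith@example.com

    The translation defined by userupdate.xsd would then present each record to the BPEL process as XML shaped roughly like this, arriving as one instance per record because we told the adapter to run the process for each record it encounters:

    <LoadUsers>
      <UserRecord>
        <name>jdoe</name>
        <email>john.doe@example.com</email>
      </UserRecord>
    </LoadUsers>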

    Read the article

  • Notes on implementing Visual Studio 2010 Navigate To

    - by cyberycon
    One of the many neat functions added to Visual Studio in VS 2010 was the Navigate To feature. You can find it by clicking Edit, Navigate To, or by using the keyboard shortcut Ctrl, (yes, that's control plus the comma key). This pops up the Navigate To dialog that looks like this:

    As you type, Navigate To starts searching through a number of different search providers for your term. The entries in the list change as you type, with most providers doing some kind of fuzzy, or at least substring, matching. If you have C#, C++ or Visual Basic projects in your solution, all symbols defined in those projects are searched. There's also a file search provider, which displays all matching filenames from projects in the current solution as well. And, if you have a Visual Studio package of your own, you can implement a provider too. Micro Focus (where I work) provide the Visual COBOL language inside Visual Studio (http://visualstudiogallery.msdn.microsoft.com/ef9bc810-c133-4581-9429-b01420a9ea40), and we wanted to provide this functionality too. This post provides some notes on the things I discovered, mainly through trial and error, but also with some kind help from devs inside Microsoft.

    The expectation of Navigate To is that it searches across the whole solution, not just the current project. So in our case, we wanted to search for all COBOL symbols inside all of our Visual COBOL projects inside the solution.

    First of all, here's the Microsoft documentation on Navigate To: http://msdn.microsoft.com/en-us/library/ee844862.aspx. It's the reference information on the Microsoft.VisualStudio.Language.NavigateTo.Interfaces namespace, and it lists all the interfaces you will need to implement to create your own Navigate To provider. Navigate To uses Visual Studio's latest mechanism for integrating external functionality and services, the Managed Extensibility Framework (MEF). MEF components don't require any registration with COM or any other registry entries to be found by Visual Studio. Visual Studio looks in several well-known locations for manifest files (extension.vsixmanifest), then uses reflection to scan for MEF attributes on classes in the assembly to determine which functionality the assembly provides. MEF itself is actually part of the .NET framework, and you can learn more about it here: http://mef.codeplex.com/. To get started with Visual Studio and MEF you could do worse than look at some of the editor examples on the VSX page http://archive.msdn.microsoft.com/vsx. I've also written a small application to help with switching between development and production MEF assemblies, which you can find on CodeProject: http://www.codeproject.com/KB/miscctrl/MEF_Switch.aspx.

    The Navigate To interfaces

    Back to Navigate To. Summarizing the MSDN reference documentation, you need to implement the following interfaces:

    - INavigateToItemProviderFactory: This is Visual Studio's entry point to your Navigate To implementation, and you must decorate your implementation with the following MEF export attribute: [Export(typeof(INavigateToItemProviderFactory))]
    - INavigateToItemProvider: Your INavigateToItemProviderFactory needs to return your implementation of INavigateToItemProvider. This class implements StartSearch() and StopSearch(). StartSearch() is the guts of your provider, and we'll come back to it in a minute. This object also needs to implement IDisposable.
    - INavigateToItemDisplayFactory: Your INavigateToItemProvider hands back NavigateToItems to the Navigate To framework. But to give you good control over what appears in the Navigate To dialog box, these items are handed back to your INavigateToItemDisplayFactory, which must create objects implementing INavigateToItemDisplay.
    - INavigateToItemDisplay: Each of these objects represents one result in the Navigate To dialog box. As well as providing the description and name of the item, this object also has a NavigateTo() method that should be capable of displaying the item in an editor when invoked.

    Carrying out the search

    The lifecycle of your INavigateToItemProvider is the same as that of the Navigate To dialog. This dialog is modal, which makes your implementation a little easier, because you know that the user can't be changing things in editors and the IDE while this dialog is up. But the Navigate To dialog DOES NOT run on the main UI thread of the IDE - so you need to be aware of that if you want to interact with editors or other parts of the IDE UI.

    When the user invokes the Navigate To dialog, your INavigateToItemProviderFactory gets sent a TryCreateNavigateToItemProvider() message. Instantiate your INavigateToItemProvider and hand it back. The sequence diagram below shows what happens next.

    Your INavigateToItemProvider will get called with StartSearch(), and passed an INavigateToCallback. StartSearch() is an asynchronous request: you must return from this method as soon as possible and conduct your search on a separate thread. For each match to the search term, instantiate a NavigateToItem object and send it to INavigateToCallback.AddItem(). But as the user types in the Search Terms field, Navigate To will invoke your StartSearch() method repeatedly with the changing search term. When you receive the next StartSearch() message, you have to abandon your current search and start a new one; you can't rely on receiving a StopSearch() message every time. Finally, when the Navigate To dialog box is closed by the user, you will get a Dispose() message - that's your cue to abandon any uncompleted searches and dispose of any resources you might be using as part of your search.

    While you conduct your search, invoke INavigateToCallback.ReportProgress() occasionally to provide feedback about how close you are to completing the search. There does not appear to be any particular requirement for how often you invoke ReportProgress(), and you report your progress as the ratio of two integers. In my implementation I report progress in terms of the number of symbols I've searched over the total number of symbols in my dictionary, and send a progress report every 16 symbols.

    Displaying the Results

    The Navigate To framework invokes INavigateToItemDisplayFactory.CreateItemDisplay() once for each result you passed to the INavigateToCallback. CreateItemDisplay() is passed the NavigateToItem you handed to the callback, and must return an INavigateToItemDisplay object. NavigateToItem is a sealed class which has a few properties, including the name of the symbol. It also has a Tag property, of type object. This enables you to stash away all the information you will need to create your INavigateToItemDisplay, which must implement a NavigateTo() method to display a symbol in an editor when the user double-clicks an entry in the Navigate To dialog box. Since the tag is of type object, it is up to you, the implementor, to decide what kind of object you store in here, and how it enables the retrieval of other information which is not included in the NavigateToItem properties.

    Some of the INavigateToItemDisplay properties are self-explanatory, but a couple of them are less obvious:

    - Additional information: The string you return here is displayed inside brackets on the same line as the Name property (in English locales, Visual Studio includes the preposition "of"). If you look at the first line in the Navigate To screenshot at the top of this article, Book_WebRole.Default is the additional information for textBookAuthor, and is the namespace-qualified type name the symbol appears in. For procedural COBOL code we display the Program Id as the additional information.
    - DescriptionItems: You can use this property to return any textual description you want about the item currently selected. You return a collection of DescriptionItem objects, each of which has a category and a description collection of DescriptionRun objects. A DescriptionRun enables you to specify some text, and optional formatting, so you have some control over the appearance of the displayed text. The DescriptionItems property is displayed at the bottom of the Navigate To dialog box, with the categories on the left and the descriptions on the right. The Visual COBOL implementation uses it to display more information about the location of an item, making it easier for the user to disambiguate duplicate names (something there can be a lot of in large COBOL applications).

    Summary

    I hope this article is useful for anyone implementing Navigate To. It is a fantastic navigation feature that Microsoft have added to Visual Studio, but at the moment there still don't seem to be any examples on how to implement it, and the reference information on MSDN is a little brief for anyone attempting an implementation.
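    To tie the "Carrying out the search" section together, here is a bare-bones provider skeleton. It deliberately uses only the members named above (StartSearch(), StopSearch(), INavigateToCallback.AddItem() and ReportProgress()); the symbols list, the MatchesTerm() helper and the body of CreateItemFor() are placeholders of my own rather than part of the real API, and the actual NavigateToItem construction is left out (see the MSDN reference for its constructor).

    using System;
    using System.Collections.Generic;
    using System.Threading;
    using Microsoft.VisualStudio.Language.NavigateTo.Interfaces;

    // Bare-bones sketch of the StartSearch()/StopSearch() lifecycle described
    // above. The symbols list, MatchesTerm() and CreateItemFor() are
    // illustrative placeholders, not part of the Navigate To API.
    public sealed class SampleItemProvider : INavigateToItemProvider
    {
        private readonly List<string> symbols = new List<string>(); // placeholder symbol store
        private CancellationTokenSource cancellation = new CancellationTokenSource();

        public void StartSearch(INavigateToCallback callback, string searchValue)
        {
            // A new search supersedes any search still in flight.
            cancellation.Cancel();
            cancellation = new CancellationTokenSource();
            CancellationToken token = cancellation.Token;

            // Return immediately; do the real work on a worker thread.
            ThreadPool.QueueUserWorkItem(state =>
            {
                int searched = 0;
                foreach (string symbol in symbols)
                {
                    if (token.IsCancellationRequested)
                        return; // abandoned: the user typed again or closed the dialog

                    if (MatchesTerm(symbol, searchValue))
                        callback.AddItem(CreateItemFor(symbol)); // one NavigateToItem per match

                    if (++searched % 16 == 0)
                        callback.ReportProgress(searched, symbols.Count);
                }
            });
        }

        public void StopSearch()
        {
            cancellation.Cancel();
        }

        public void Dispose()
        {
            cancellation.Cancel(); // abandon any uncompleted search
        }

        private static bool MatchesTerm(string symbol, string term)
        {
            // Simple substring match; real providers can be fuzzier.
            return symbol.IndexOf(term, StringComparison.OrdinalIgnoreCase) >= 0;
        }

        private NavigateToItem CreateItemFor(string symbol)
        {
            // Construct a NavigateToItem for the match here; its constructor
            // arguments are omitted from this sketch - see the MSDN reference.
            throw new NotImplementedException();
        }
    }

    The factory side is then a thin wrapper: a class decorated with [Export(typeof(INavigateToItemProviderFactory))] whose TryCreateNavigateToItemProvider() news up this provider, presumably returning true on success.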

    Read the article

  • Search engine solution for Django that actually works?

    - by prometheus
    The story so far:

    - Decided to go with Xapian as the search backend because it has all the search-engine features I was looking for, knows about Unicode, does stemming, has few dependencies and requires no bloated app-server installation on top of it.
    - Tried Django and Haystack (plus xapian-haystack, the backend glue code to tie Haystack to Xapian) because it was advertised on quite some blogs as "working". Did not work. Neither django-haystack nor the xapian-haystack project provide a version combination that actually works together. MASTER from both projects yields an error from Xapian, so it's not stable at all. Haystack 1.0.1 and xapian-haystack 1.0.x/1.1.0 are not API-compatible. Plus, in a minimally working installation of Haystack 1.0.1 and xapian-haystack MASTER, any complex query yields zero results due to errors in either django-haystack or xapian-haystack (I double-verified this), maybe because the unit tests actually test very simple cases, and no edge cases at all.
    - Tried Djapian. The source code is riddled with spelling errors (mind you, in variable names, not comments), and the documentation is also riddled with ambiguities and outdated information that will never lead to a working installation. Not surprisingly, users rarely ask for features but how to get it working in the first place.
    - Next on the plate: exploring Solr (installing a Java environment plus Tomcat gives me headaches; the machine is RAM- and CPU-constrained), or Lucene (slightly fewer headaches, but still).

    Before I proceed spending more time with a solution that might or might not work as advertised, I'd like to know: did anyone ever get an actual, real-world search solution working in Django? I'm serious. I find it really frustrating reading about "large problems mostly solved", and then realizing that you will never get a working installation from the source code because, actually, all bloggers dealing with those "mostly solved problems" never went past basic installation and copy-pasting the official tutorials.

    So here are the requirements:

    - must be able to search for 10-100 terms in one query
    - must handle + (term must be present) and - (term must not be present), AND/OR
    - must handle arbitrary grouping (i.e. parentheses around AND/OR)
    - must allow for Django-ORM filtering before or after fulltext search (i.e. pre-/post-processing of results with the full set of filters that Django knows about); alternatively, there must be a facility to bulk-fetch the result set and transform it into a QuerySet
    - should be light on the machine, so preferably no humongous JVM and Java-based app-server installation

    Is there anything out there that does this? I'm not interested in anecdotal evidence, or references to some blog posts that claim it should be working. I'd like to hear from someone who actually has a fully functional setup working in the real world, under real conditions, with real queries.

    EDIT: Let me repeat again that I'm not so much interested in anecdotal evidence that someone, somewhere has a somewhat running installation working with unspecified properties. I already went there, I read all the blog posts and mailing lists, I contacted the authors, but when it came to actual implementation of real-world scenarios, nothing ever worked as advertised.

    Also, and a user below brought that point up as well, considering the TCO of any project, I'm definitely not interested in hearing that someone, somewhere was able to pull it off once a vendor parachuted in an unknown number of specialists to monkey-patch the whole installation with specific domain knowledge that's documented nowhere.

    So, please, if you claim you have a working installation that actually satisfies the minimum requirements for a full-fledged search (see requirements above), please provide the following so that we can all benefit from a search solution for Django that actually solves the problem:

    - exact Linux distribution and release version,
    - exact release version of Haystack (or equivalent) and release version of the search backend,
    - exact release version of the search engine,
    - publicly (!) available documentation on how to set up all components exactly in the way your installation was set up, such that the minimal requirements above are met.

    Thank you.

    Read the article

  • Difference between SQL 2005 and SQL 2008 for inserting multiple rows with XML

    - by Sam Dahan
    I am using the following SQL code for inserting multiple rows of data in a table. The data is passed to the stored procedure using an XML variable:

    INSERT INTO MyTable
    SELECT SampleTime = T.Item.value('SampleTime[1]', 'datetime'),
           Volume1 = T.Item.value('Volume1[1]', 'float'),
           Volume2 = T.Item.value('Volume2[1]', 'float')
    FROM @xml.nodes('//Root/MyRecord') T(item)

    I have a whole bunch of unit tests to verify that I am inserting the right information, the right number of records, etc. when I call the stored procedure. All fine and dandy - that is, until we began to monkey around with the compatibility level of the database. The code above worked beautifully as long as we kept the compatibility level of the DB at 90 (SQL 2005). When we set the compatibility level to 100 (SQL 2008), the unit tests failed, because the stored procedure using the code above times out. The unit tests drop the database, re-create it from scripts, and run the tests on the brand-new DB, so it's not - I think - a question of the 'old compatibility level' sticking around.

    Using SQL Server Management Studio, I made up a quick test SQL script. Using the same XML chunk, I alter the DB compatibility level, truncate the table, then use the code above to insert 650 rows. When the level is 90 (SQL 2005), it runs in milliseconds. When the level is 100 (SQL 2008), it sometimes takes over a minute, sometimes runs in milliseconds. I'd appreciate any insight anyone might have into that.

    EDIT: The script takes over a minute to run with my actual data, which has more rows than I show here, is a real table, and has an index. With the following example code, the difference goes between milliseconds and around 5 seconds.

    --use [master]
    --ALTER DATABASE MyDB SET compatibility_level = 100
    use [MyDB]

    declare @xml xml
    set @xml = '<?xml version="1.0"?>
    <Root xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <Record>
        <SampleTime>2009-01-24T00:00:00</SampleTime>
        <Volume1>0</Volume1>
        <Volume2>0</Volume2>
      </Record>
      ..... 653 records, sample time spaced out 4 hours ........
    </Root>'

    DECLARE @myTable TABLE(
        ID int IDENTITY(1,1) NOT NULL,
        [SampleTime] [datetime] NOT NULL,
        [Volume1] [float] NULL,
        [Volume2] [float] NULL)

    INSERT INTO @myTable
    SELECT T.Item.value('SampleTime[1]', 'datetime') as SampleTime,
           Volume1 = T.Item.value('Volume1[1]', 'float'),
           Volume2 = T.Item.value('Volume2[1]', 'float')
    FROM @xml.nodes('//Root/Record') T(item)

    I uncomment the two lines at the top, select them and run just that (the ALTER DATABASE statement), then comment the two lines, deselect any text and run the whole thing. When I change from 90 to 100, it takes around 5 seconds every time (I change the level once, but I run the series several times to see if I have consistent results). When I change from 100 to 90, it runs in milliseconds every time. Just so you can play with it too: I am using SQL Server 2008 R2 Standard Edition.
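    For anyone reproducing this, a hedged way to put numbers on the difference at each compatibility level is to time the insert explicitly. This sketch assumes the same @xml and @myTable declarations as the script above; it is only a measurement aid, not a fix.

    -- Hedged sketch: measure the insert under the current compatibility level.
    -- Assumes @xml and @myTable are declared as in the script above.
    DECLARE @start datetime2 = SYSDATETIME();

    INSERT INTO @myTable
    SELECT T.Item.value('SampleTime[1]', 'datetime'),
           T.Item.value('Volume1[1]', 'float'),
           T.Item.value('Volume2[1]', 'float')
    FROM @xml.nodes('//Root/Record') T(item);

    SELECT DATEDIFF(millisecond, @start, SYSDATETIME()) AS elapsed_ms;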

    Read the article

  • In Exim, is RBL spam rejected prior to being scanned by SpamAssassin?

    - by user955664
    I've recently been battling spam issues on our mail server. One account in particular was getting hammered with incoming spam. SpamAssassin's memory use is one of our concerns. What I've done is enable RBLs in Exim. I now see many rejection notices in the Exim log based on the various RBLs, which is good. However, when I run Eximstats, the numbers seem to be the same as they were prior to the enabling of the RBLs. I am assuming because the email is still logged in some way prior to the rejection. Is that what's happening, or am I missing something else? Does anyone know if these emails are rejected prior to being processed by SpamAssassin? Or does anyone know how I'd be able to find out? Is there a standard way to generate SpamAssassin stats, similar to Eximstats, so that I could compare the numbers? Thank you for your time and any advice. Edit: Here is the ACL section of my Exim configuration file ###################################################################### # ACLs # ###################################################################### begin acl # ACL that is used after the RCPT command check_recipient: # to block certain wellknown exploits, Deny for local domains if # local parts begin with a dot or contain @ % ! / | deny domains = +local_domains local_parts = ^[.] : ^.*[@%!/|] # to restrict port 587 to authenticated users only # see also daemon_smtp_ports above accept hosts = +auth_relay_hosts condition = ${if eq {$interface_port}{587} {yes}{no}} endpass message = relay not permitted, authentication required authenticated = * # allow local users to send outgoing messages using slashes # and vertical bars in their local parts. # Block outgoing local parts that begin with a dot, slash, or vertical # bar but allows them within the local part. # The sequence \..\ is barred. The usage of @ % and ! is barred as # before. The motivation is to prevent your users (or their virii) # from mounting certain kinds of attacks on remote sites. deny domains = !+local_domains local_parts = ^[./|] : ^.*[@%!] : ^.*/\\.\\./ # local source whitelist # accept if the source is local SMTP (i.e. not over TCP/IP). # Test for this by testing for an empty sending host field. accept hosts = : # sender domains whitelist # accept if sender domain is in whitelist accept sender_domains = +whitelist_domains # sender hosts whitelist # accept if sender host is in whitelist accept hosts = +whitelist_hosts accept hosts = +whitelist_hosts_ip # envelope senders whitelist # accept if envelope sender is in whitelist accept senders = +whitelist_senders # accept mail to postmaster in any local domain, regardless of source accept local_parts = postmaster domains = +local_domains # accept mail to abuse in any local domain, regardless of source accept local_parts = abuse domains = +local_domains # accept mail to hostmaster in any local domain, regardless of source accept local_parts = hostmaster domains =+local_domains # OPTIONAL MODIFICATIONS: # If the page you're using to notify senders of blocked email of how # to get their address unblocked will use a web form to send you email so # you'll know to unblock those senders, then you may leave these lines # commented out. However, if you'll be telling your senders of blocked # email to send an email to [email protected], then you should # replace "errors" with the left side of the email address you'll be # using, and "example.com" with the right side of the email address and # then uncomment the second two lines, leaving the first one commented. 
# Doing this will mean anyone can send email to this specific address, # even if they're at a blocked domain, and even if your domain is using # blocklists. # accept mail to [email protected], regardless of source # accept local_parts = errors # domains = example.com # deny so-called "legal" spammers" deny message = Email blocked by LBL - to unblock see http://www.example.com/ # only for domains that do want to be tested against RBLs domains = +use_rbl_domains sender_domains = +blacklist_domains # deny using hostname in bad_sender_hosts blacklist deny message = Email blocked by BSHL - to unblock see http://www.example.com/ # only for domains that do want to be tested against RBLs domains = +use_rbl_domains hosts = +bad_sender_hosts # deny using IP in bad_sender_hosts blacklist deny message = Email blocked by BSHL - to unblock see http://www.example.com/ # only for domains that do want to be tested against RBLs domains = +use_rbl_domains hosts = +bad_sender_hosts_ip # deny using email address in blacklist_senders deny message = Email blocked by BSAL - to unblock see http://www.example.com/ domains = +use_rbl_domains senders = +blacklist_senders # By default we do NOT require sender verification. # Sender verification denies unless sender address can be verified: # If you want to require sender verification, i.e., that the sending # address is routable and mail can be delivered to it, then # uncomment the next line. If you do not want to require sender # verification, leave the line commented out #require verify = sender # deny using .spamhaus deny message = Email blocked by SPAMHAUS - to unblock see http://www.example.com/ # only for domains that do want to be tested against RBLs domains = +use_rbl_domains dnslists = sbl.spamhaus.org # deny using ordb # deny message = Email blocked by ORDB - to unblock see http://www.example.com/ # # only for domains that do want to be tested against RBLs # domains = +use_rbl_domains # dnslists = relays.ordb.org # deny using sorbs smtp list deny message = Email blocked by SORBS - to unblock see http://www.example.com/ # only for domains that do want to be tested against RBLs domains = +use_rbl_domains dnslists = dnsbl.sorbs.net=127.0.0.5 # Next deny stuff from more "fuzzy" blacklists # but do bypass all checking for whitelisted host names # and for authenticated users # deny using spamcop deny message = Email blocked by SPAMCOP - to unblock see http://www.example.com/ hosts = !+relay_hosts domains = +use_rbl_domains !authenticated = * dnslists = bl.spamcop.net # deny using njabl deny message = Email blocked by NJABL - to unblock see http://www.example.com/ hosts = !+relay_hosts domains = +use_rbl_domains !authenticated = * dnslists = dnsbl.njabl.org # deny using cbl deny message = Email blocked by CBL - to unblock see http://www.example.com/ hosts = !+relay_hosts domains = +use_rbl_domains !authenticated = * dnslists = cbl.abuseat.org # deny using all other sorbs ip-based blocklist besides smtp list deny message = Email blocked by SORBS - to unblock see http://www.example.com/ hosts = !+relay_hosts domains = +use_rbl_domains !authenticated = * dnslists = dnsbl.sorbs.net!=127.0.0.6 # deny using sorbs name based list deny message = Email blocked by SORBS - to unblock see http://www.example.com/ domains =+use_rbl_domains # rhsbl list is name based dnslists = rhsbl.sorbs.net/$sender_address_domain # accept if address is in a local domain as long as recipient can be verified accept domains = +local_domains endpass message = "Unknown User" verify = recipient # 
accept if address is in a domain for which we relay as long as recipient # can be verified accept domains = +relay_domains endpass verify=recipient # accept if message comes for a host for which we are an outgoing relay # recipient verification is omitted because many MUA clients don't cope # well with SMTP error responses. If you are actually relaying from MTAs # then you should probably add recipient verify here accept hosts = +relay_hosts accept hosts = +auth_relay_hosts endpass message = authentication required authenticated = * deny message = relay not permitted # default at end of acl causes a "deny", but line below will give # an explicit error message: deny message = relay not permitted # ACL that is used after the DATA command check_message: accept

    Read the article

  • Button click does not start Service in Android App Widget

    - by Feanor
    I'm having trouble starting a Service to update an AppWidget that I'm creating as an exercise. I'm trying to get the latitude and longitude of spoofed location data from DDMS to display in the widget. The widget uses a service to update the TextView, which may be slightly overkill, but I wanted to follow the template that seems to be common in AppWidgets that do more work (like the Forecast widget or the Wiktionary widget). Right now, I'm not getting any error messages or strange behavior; nothing at all happens when the button is pressed. I'm a bit mystified as to what might be wrong. Could anyone out there point me in the right direction? Additionally, if my logic for location is faulty, I'd love recommendations on that too. I've looked at several blogs, the Google examples, and the documentation, but I feel a little fuzzy on how it works. Here is the current state of the widget:

        public class Widget extends AppWidgetProvider {
            static final String TAG = "Widget";

            /**
             * {@inheritDoc}
             */
            public void onUpdate(Context context, AppWidgetManager appWidgetManager,
                    int[] appWidgetIds) {
                // Create an intent to launch the service
                Intent serviceIntent = new Intent(context, UpdateService.class);

                // PendingIntent is required for the onClickPendingIntent that actually
                // starts the service from a button click
                PendingIntent pendingServiceIntent =
                        PendingIntent.getService(context, 0, serviceIntent, 0);

                // Get the layout for the App Widget and attach a click listener
                // to the button
                RemoteViews views = new RemoteViews(context.getPackageName(), R.layout.main);
                views.setOnClickPendingIntent(R.id.address_button, pendingServiceIntent);

                super.onUpdate(context, appWidgetManager, appWidgetIds);
            }

            // To prevent any ANR timeouts, we perform the update in a service;
            // really should have its own thread too
            public static class UpdateService extends Service {
                static final String TAG = "UpdateService";

                private LocationManager locationManager;
                private Location currentLocation;
                private double latitude;
                private double longitude;

                public void onStart(Intent intent, int startId) {
                    // Get a LocationManager from the system services
                    locationManager =
                            (LocationManager) getSystemService(Context.LOCATION_SERVICE);

                    // Register for updates from spoofed GPS
                    locationManager.requestLocationUpdates("gps", 30000L, 0.0f,
                            new LocationListener() {
                                @Override
                                public void onLocationChanged(Location location) {
                                    currentLocation = location;
                                }

                                @Override
                                public void onProviderDisabled(String provider) {}

                                @Override
                                public void onProviderEnabled(String provider) {}

                                @Override
                                public void onStatusChanged(String provider, int status,
                                        Bundle extras) {}
                            });

                    // Get the last known location from GPS
                    currentLocation = locationManager.getLastKnownLocation("gps");

                    // Build the widget update
                    RemoteViews updateViews = buildUpdate(this);

                    // Push update for this widget to the home screen
                    ComponentName thisWidget = new ComponentName(this, Widget.class);

                    // AppWidgetManager updates AppWidget state; gets information about
                    // installed AppWidget providers and other AppWidget related state
                    AppWidgetManager manager = AppWidgetManager.getInstance(this);

                    // Updates the views based on the RemoteView returned from the
                    // buildUpdate method (stored in updateViews)
                    manager.updateAppWidget(thisWidget, updateViews);
                }

                public RemoteViews buildUpdate(Context context) {
                    latitude = currentLocation.getLatitude();
                    longitude = currentLocation.getLongitude();

                    RemoteViews updateViews =
                            new RemoteViews(context.getPackageName(), R.layout.main);
                    updateViews.setTextViewText(R.id.latitude_text, "" + latitude);
                    updateViews.setTextViewText(R.id.longitude_text, "" + longitude);
                    return updateViews;
                }

                @Override
                public IBinder onBind(Intent intent) {
                    // We don't need to bind to this service
                    return null;
                }
            }
        }
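    One detail stands out in onUpdate above: the RemoteViews carrying the click binding is built but never handed to the AppWidgetManager, so the button on the home screen may never receive its PendingIntent. A minimal sketch of onUpdate with that push added (a suggested fix, not code from the original post), assuming the same layout and ids:

        public void onUpdate(Context context, AppWidgetManager appWidgetManager,
                int[] appWidgetIds) {
            Intent serviceIntent = new Intent(context, UpdateService.class);
            PendingIntent pendingServiceIntent =
                    PendingIntent.getService(context, 0, serviceIntent, 0);

            RemoteViews views = new RemoteViews(context.getPackageName(), R.layout.main);
            views.setOnClickPendingIntent(R.id.address_button, pendingServiceIntent);

            // The missing step: without this call, the RemoteViews built here
            // (and its click binding) never reaches the widget instances.
            appWidgetManager.updateAppWidget(appWidgetIds, views);

            super.onUpdate(context, appWidgetManager, appWidgetIds);
        }

    On the location side, note that getLastKnownLocation("gps") can return null before any fix has been delivered, in which case buildUpdate would throw a NullPointerException; guarding that call is worth doing regardless of the button issue.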

  • FMOD.net streaming, callback and exinfo parameters

    - by Tesserex
    I posted a question on gamedev about how to play nsf files (NES console music) in FMOD. It didn't get any results, but since then I made some progress. I decided that the easiest method was just to compile an existing player into a dll and then call it from C# to populate my buffer. The problem now is getting it to sound right and making sure all my parameters are correct. Here are the facts so far:

    1. The nsf dll is dealing with shorts, so the data is PCM16.
    2. The sample nsf I'm using has a playback rate of 60 Hz.
    3. Just for playing around now, I'm using a frequency of 48000.
    4. Based on 2 and 3, the dll calculates a necessary buffer size of 48000 / 60 Hz = 800. This means it will render 800 shorts' worth of buffer for every simulated NES frame.

    I've so far got my C# code to play the nsf at the correct pitch and tempo, but it's very grainy / fuzzy, which I'm attributing to the fact that the FMOD read callback is giving a data length of 1600, whereas I should be expecting 800. I've tried playing around with all the numbers, and it either crashes, or the music changes pitch, tempo, or both. Here's some of my C# code:

        uint channels = 1, frequency = 48000;
        FMOD.MODE mode = (FMOD.MODE.DEFAULT | FMOD.MODE.OPENUSER | FMOD.MODE.LOOP_NORMAL);
        FMOD.Sound sound = new FMOD.Sound();

        FMOD.CREATESOUNDEXINFO ex = new FMOD.CREATESOUNDEXINFO();
        ex.cbsize = Marshal.SizeOf(ex);
        ex.fileoffset = 0;
        ex.format = FMOD.SOUND_FORMAT.PCM16;
        // does this even matter? It doesn't change my results
        // as long as it's long enough for one update
        ex.length = frequency;
        ex.numchannels = (int)channels;
        ex.defaultfrequency = (int)frequency;
        ex.pcmreadcallback = pcmreadcallback;
        ex.dlsname = null;
        // eventually I will calculate this with frequency / nsf hz,
        // but I'm just testing for now
        ex.decodebuffersize = 800;

        // from the dll; 8 is the track number to play
        load_nsf_file("file.nsf", 8, (int)frequency);

        var result = system.createSound((string)null, (mode | FMOD.MODE.CREATESTREAM),
                ref ex, ref sound);

        channel = new FMOD.Channel();
        result = system.playSound(FMOD.CHANNELINDEX.FREE, sound, false, ref channel);

        private FMOD.RESULT PCMREADCALLBACK(IntPtr soundraw, IntPtr data, uint datalen)
        {
            // from the dll; if I use datalen, it usually crashes
            // (I can't get datalen to = 800 safely)
            process_buffer(data, (int)800);
            return FMOD.RESULT.OK;
        }

    So here are some of my questions:

    1. What is the relationship between exinfo.decodebuffersize, frequency, and the datalen parameter of the read callback? With this code sample, it's coming in as 3200. I don't know where that factor of 4 between it and the decodebuffersize comes from.
    2. Is datalen in the callback referring to the number of bytes, or shorts? The process_buffer function takes a short array and its length. I would expect FMOD is talking about shorts as well, because I told it PCM16.
    3. Maybe my playback quality is bad for some totally different reason. If so, I have no idea where to begin solving that. Any ideas there?
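    A hedged note on questions 1 and 2, based on FMOD's documented callback contract rather than anything specific to the nsf dll: datalen is a byte count, not a short count, while decodebuffersize is measured in PCM samples. At PCM16 mono, 800 samples is 1600 bytes, and FMOD may request more than one decode block per callback, which would account for the observed 3200. Under that reading, filling only 800 shorts leaves most of each requested buffer stale, which fits the grainy output. Below is a sketch of a callback that fills everything FMOD asks for, assuming process_buffer (the dll import above) renders exactly one 800-short NES frame per call:

        private FMOD.RESULT PCMREADCALLBACK(IntPtr soundraw, IntPtr data, uint datalen)
        {
            const uint SHORTS_PER_FRAME = 800;                 // 48000 Hz / 60 Hz
            const uint BYTES_PER_FRAME = SHORTS_PER_FRAME * 2; // PCM16: 2 bytes per sample

            // datalen is in bytes; advance through the buffer one emulated
            // NES frame at a time until the whole request is filled
            for (uint offset = 0; offset + BYTES_PER_FRAME <= datalen; offset += BYTES_PER_FRAME)
            {
                process_buffer(new IntPtr(data.ToInt64() + offset), (int)SHORTS_PER_FRAME);
            }
            return FMOD.RESULT.OK;
        }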

  • Phone-book Database Help - Python

    - by IDOntWantThat
    I'm new to programming and have an assignment I've been working on for a while. I understand defining functions and a lot of the basics, but I'm kind of running into a brick wall at this point. I'm trying to figure this one out and don't really understand how the 'class' feature works yet. I'd appreciate any help with this one, as well as pointers to Python resources that can break down how and why classes are used. Here is the assignment:

    You've been going to work on a database project at work for some time now. Your boss encourages you to program the database in Python. You disagree, arguing that Python is not a database language, but your boss persists by providing the source code below for a sample telephone database. He asks you to do two things:

    1. Evaluate the existing source code and extend it to make it useful for managers in the firm. (You do not need a GUI interface; just work on the database aspects: data entry and retrieval. Of course, you must get the program to run and work properly.)
    2. Critically evaluate Python as a database tool.

    Import the sample code below into the Python IDLE and enhance it, run it, and debug it. Add features to make this a more realistic database tool by providing for easy data entry and retrieval.

        import shelve
        import string

        UNKNOWN = 0
        HOME = 1
        WORK = 2
        FAX = 3
        CELL = 4

        class phoneentry:
            def __init__(self, name = 'Unknown', number = 'Unknown', type = UNKNOWN):
                self.name = name
                self.number = number
                self.type = type

            # create string representation
            def __repr__(self):
                return('%s:%d' % ( self.name, self.type ))

            # fuzzy compare of two items
            def __cmp__(self, that):
                this = string.lower(str(self))
                that = string.lower(that)
                if string.find(this, that) >= 0:
                    return(0)
                return(cmp(this, that))

            def showtype(self):
                if self.type == UNKNOWN: return('Unknown')
                if self.type == HOME: return('Home')
                if self.type == WORK: return('Work')
                if self.type == FAX: return('Fax')
                if self.type == CELL: return('Cellular')

        class phonedb:
            def __init__(self, dbname = 'phonedata'):
                self.dbname = dbname
                self.shelve = shelve.open(self.dbname)

            def __del__(self):
                self.shelve.close()
                self.shelve = None

            def add(self, name, number, type = HOME):
                e = phoneentry(name, number, type)
                self.shelve[str(e)] = e

            def lookup(self, string):
                list = []
                for key in self.shelve.keys():
                    e = self.shelve[key]
                    if cmp(e, string) == 0:
                        list.append(e)
                return(list)

        # if not being loaded as a module, run a small test
        if __name__ == '__main__':
            foo = phonedb()

            foo.add('Sean Reifschneider', '970-555-1111', HOME)
            foo.add('Sean Reifschneider', '970-555-2222', CELL)
            foo.add('Evelyn Mitchell', '970-555-1111', HOME)

            print 'First lookup:'
            for entry in foo.lookup('reifsch'):
                print '%-40s %s (%s)' % ( entry.name, entry.number, entry.showtype() )
            print

            print 'Second lookup:'
            for entry in foo.lookup('e'):
                print '%-40s %s (%s)' % ( entry.name, entry.number, entry.showtype() )

    I'm not sure if I'm on the right track, but here is what I have so far:

        def openPB():
            foo = phonedb()
            print 'Please select an option:'
            print '1 - Lookup'
            print '2 - Add'
            print '3 - Delete'
            print '4 - Quit'
            entry = int(raw_input('>> '))
            if entry == 1:
                namelookup = raw_input('Please enter a name: ')
                for entry in foo.lookup(namelookup):
                    print '%-40s %s (%s)' % (entry.name, entry.number, entry.showtype())
            elif entry == 2:
                name = raw_input('Name: ')
                number = raw_input('Number: ')
                showtype = input('Type (UNKNOWN, HOME, WORK, FAX, CELL): \n>> ')
                for entry in foo.add(name, number, showtype):  # Trying to figure out this part
                    print '%-40s %s (%s)' % (entry.name, entry.number, entry.showtype())
            elif entry == 3:
                delname = raw_input('Please enter a name to delete: ')
                # Trying to figure out this part
                print "Contact '%s' has been deleted" % (delname)
            elif entry == 4:
                print "Phone book is now closed"
                quit()
            else:
                print "Your entry was not recognized."
                openPB()

        openPB()
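    One hedged nudge on the draft above: phonedb.add returns None, so the for loop over foo.add(...) in the add branch will raise a TypeError; calling foo.add and then printing the name and number already in hand is enough. For the delete branch, the shelf itself supports deletion by key, so a minimal sketch is a hypothetical delete method added to phonedb, modeled directly on the existing lookup (the method name and return value are assumptions, not part of the assignment code):

        # Hypothetical addition to the phonedb class, mirroring lookup():
        # remove every entry that fuzzy-matches the given text, report how many.
        def delete(self, text):
            removed = 0
            for key in self.shelve.keys():            # keys() is a list in Python 2,
                if cmp(self.shelve[key], text) == 0:  # so deleting while looping is safe
                    del self.shelve[key]
                    removed += 1
            return removed

    The menu branch can then call foo.delete(delname) and print the confirmation only when the returned count is nonzero.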
