
  • Lookup Viewer

    - by Geertjan
    The Maven integrated view that I showed yesterday was possible to create because I happened to know that implementations of SubprojectProvider and LogicalViewProvider are in the Lookup of Maven projects. With that knowledge, I was able to use and even delegate to those implementations. But what if you don't know that those implementations are in the Lookup of the Project object? In the case of the Maven Project implementation, you could look at the "getLookup" method in its source code. However, any other module could be putting its own objects into that Lookup dynamically, i.e., at runtime. So there's no way of knowing what's in the Lookup of any Project object, or of any other object with a Lookup.

    But now imagine that you have a Lookup Viewer as a tool during development, which you would exclude when distributing the application. Whenever new objects are found in the Lookup, the viewer displays them. You could install the Lookup Viewer into NetBeans IDE, or any other NetBeans Platform application, and get a quick impression of what's actually in the Lookup when you select a different item in the application during development. Here it is (though I vaguely remember someone else writing something similar):

    Above, a Maven Project is selected. The Lookup Window shows that, among many other classes, implementations of SubprojectProvider and LogicalViewProvider are found in the Lookup when the Maven Project is selected. If an item in the Lookup Window has its own Lookup, the content of that Lookup is displayed as child nodes, and so on; that is, you can explore all the way down the Lookup of each item found within the current selection. (What's especially fun is seeing the SaveCookieImpl being added and removed from the Lookup Window when you make or save a change in a document.)

    Another example is below, showing the Lookup Window installed in a custom application created during a course at MIT in Boston:

    A small trick I had to apply is that I always show the previous Lookup, since the current Lookup, when you select one of the Nodes in the Lookup Window, would be the Lookup of the Lookup Window itself! If anyone is interested in this, I can publish the NetBeans module providing the above window to the NetBeans update center.
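    Until then, if you want to experiment with the underlying idea: the core of such a viewer is just a listener on the global selection Lookup that enumerates whatever instances appear in it. Below is a minimal sketch of that idea using the standard org.openide.util API -- it is not the module's actual source, and the class name and console output (standing in for the real Lookup Window) are mine:

        import org.openide.util.Lookup;
        import org.openide.util.LookupEvent;
        import org.openide.util.LookupListener;
        import org.openide.util.Utilities;

        public class LookupWatcher implements LookupListener {

            // Query Object.class so that *everything* in the selection Lookup is reported.
            private final Lookup.Result<Object> result =
                    Utilities.actionsGlobalContext().lookupResult(Object.class);

            public LookupWatcher() {
                result.addLookupListener(this);
                resultChanged(null); // show the initial selection
            }

            @Override
            public void resultChanged(LookupEvent ev) {
                System.out.println("--- current selection Lookup ---");
                for (Object o : result.allInstances()) {
                    // A real viewer would build child nodes here, and recurse into
                    // o's own Lookup if o is a Lookup.Provider.
                    System.out.println(o.getClass().getName());
                }
            }
        }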

  • .Net Application & Database Modularity/Reuse

    - by Martaver
    I'm looking for some guidance on how to architect an app with regard to modularity, separation of concerns and re-usability. I'm working on an application (ASP.Net, C#) that has distinctly generic chunks of functionality that I'd love to lift out, all layers, into re-usable components. This means the module handles the database schema, data access, API, everything, so that the next time I want to use it I can just register the module and hook into it.

    Developing modules of re-usable functionality is a no-brainer, but what is really confusing me is what to do when it comes to handling a core re-usable database schema that serves the module's functionality. In an ideal world, I would register a module and it would ensure that the associated database schema exists in the DB. I would code on the assumption that the tables exist, calling the module's functionality through the DLL, agnostic of the database layer. Kind of like Enterprise Library's Caching/Logging Application Block, which can create a DB schema in the target DB to use as a data store.

    My question is: what do you think is the best way to achieve this, firstly in terms of design architecture, and secondly in terms of solution structure? What patterns/frameworks do you know that exist and support this kind of thing?

    My thoughts so far: I mostly use Entity Framework and SQL Server DB Projects. I thought about a 'black box' approach to modules of functionality. I could use a code-first approach in EF4 and use the ObjectContext to create a database when the module is initialized. However, this means that all of the entities my module encapsulates would be disconnected from the rest of the application, because they would belong to an abstracted ObjectContext. Further, creating appropriate indexes and references between domain entities and the module's entities would be practically impossible.

    I've thought of adopting Enterprise Library and creating my own Application Blocks, but I'm not sure how this would play nice with Entity Framework (if at all). I like the idea of building on proven patterns and practices to encapsulate established, reusable functionality.

    I also thought of abandoning Entity Framework for the module, and just creating a separate DB schema for the module with its own set of stored procedures and ADO.Net, then deploying the script at run-time if interrogation shows that it doesn't exist. But once again, for development outside of the module, I would want to use Entity Framework, and I would have to use the module separately, disconnected from the domain ObjectContext.

    Has anyone had experience developing these sorts of full-stack modules? What advice can you offer? Am I biting off more than I can chew?
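    To make the "register a module and it provisions its own schema" idea concrete, here is a minimal sketch of the kind of contract I have in mind, assuming a SQL Server backend. The interface, table name and DDL are hypothetical placeholders, not an existing framework:

        using System.Data.SqlClient;

        // Hypothetical contract: every module provisions its own schema when registered.
        public interface IAppModule
        {
            void EnsureSchema(string connectionString);
        }

        // Illustrative module: creates its table only if it is missing, so the call
        // is idempotent and safe to run at every application start-up.
        public class AuditModule : IAppModule
        {
            public void EnsureSchema(string connectionString)
            {
                const string ddl = @"
                    IF OBJECT_ID(N'dbo.AuditEntries', N'U') IS NULL
                    CREATE TABLE dbo.AuditEntries (
                        Id         INT IDENTITY PRIMARY KEY,
                        Message    NVARCHAR(400) NOT NULL,
                        CreatedUtc DATETIME NOT NULL DEFAULT GETUTCDATE());";

                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(ddl, conn))
                {
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }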

  • Cron job running successfully suddenly reports script is not found

    - by Ted B
    What might cause cron to suddenly report that a file it is supposed to run is "not found", when the file hasn't been touched, and in fact the entire system hasn't been touched since it last ran successfully?

    I have a cronjob schedule I define by doing:

        sudo crontab -e

    In it, I have dozens of cron jobs that run successfully. I do not have a PATH specified, and I use absolute paths to call all my scheduled scripts, setting the PATH in them as needed. I do not specify a SHELL in the crontab. All scripts identify the shell as their first line.

    Without me touching the system, a particular job defined in the middle of other jobs will suddenly stop running. To debug this, I added an output redirection to a log file. In it, the output clearly shows the script successfully running time after time for weeks, and then suddenly the following appears:

        /bin/sh: /home/iupress/bin/sync-email_images: not found
        /bin/sh: /home/iupress/bin/sync-email_images: not found
        /bin/sh: /home/iupress/bin/sync-email_images: not found
        /bin/sh: /home/iupress/bin/sync-email_images: not found
        /bin/sh: /home/iupress/bin/sync-email_images: not found
        /bin/sh: /home/iupress/bin/sync-email_images: not found
        /bin/sh: /home/iupress/bin/sync-email_images: not found
        /bin/sh: /home/iupress/bin/sync-email_images: not found
        /bin/sh: /home/iupress/bin/sync-email_images: not found

    If I run the ls command, copying and pasting that exact path from the error message, it clearly reports the file is still there (no surprise). Yet the log continues to report that the file is "not found" until I take action. I can run the script manually and it runs just fine. If I do sudo crontab -e and save the file, the job runs at the next scheduled time, putting its output in the log and no longer reporting the script as "not found".

    It seems to me the contents of the script are irrelevant, since cron never even gets to process the file because it is "not found". I have a job scheduled below the one that is encountering this problem that I know continues to run, because I have its output mailed to me. So I know cron is running and continues to run at least one other job, even after it suddenly reports this job's script is "not found".

    All my lines end with a newline. I had no periods in the crontab until I added the redirection to a log file. I have now added a PATH specification, but left the absolute paths in the jobs. Unfortunately, I have no idea if and when this problem will recur; it will likely be weeks from now. By the way, I am running a script to synchronize the clock, and I see the time is exactly what it should be.
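    For reference, the shape of crontab I'm converging on -- explicit SHELL and PATH plus per-job logging so any shell-level error is captured alongside the script's own output -- looks like this (the schedule and log path here are illustrative, not my real ones):

        SHELL=/bin/bash
        PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

        # m   h  dom mon dow  command
        */15  *  *   *   *    /home/iupress/bin/sync-email_images >> /var/log/sync-email_images.log 2>&1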

  • Building SANE from git source produces a backend mismatch on 12.04, even when built locally

    - by deinonychusaur
    It seems to me that with Ubuntu Precise Pangolin it is anything but easy to do a proper install of SANE from source (the git repo). While trying to find an answer to this, I've found other scanning issues where the output people posted seems to indicate they suffer the same problem (unknowingly). On a fresh install of Ubuntu 12.04 with the SANE source compiled from git, I get:

        $ scanimage -V
        scanimage (sane-backends) 1.0.24git; backend version 1.0.22

    (I basically followed the instructions on http://ubuntuportal.com/2012/02/how-to-get-an-canon-canoscan-lide-100-scanner-to-work-in-ubuntu-11-10linux-mint-12.html since I didn't find any other information, making sure that SANE was not installed prior to installation.)

    My primary interest is the epson2 backend. In 1.0.22 it offers the wrong TPU settings for the Epson V700 (TPU2 mode wasn't supported in 1.0.22, and the scanner is useless to me if I don't have TPU2 support). If I ask it to enter transparency mode, it shows 1.0.22 behaviour, which implies that the epson2 backend comes from 1.0.22 and not from the 1.0.24 I just built. If I install SANE with a prefix pointing to a local folder and run that version of scanimage, it still produces the mismatch. However, on another computer where I had installed a custom 1.0.22 build of SANE prior to upgrading to Ubuntu 12.04, I can build and install the same SANE git locally and have it correctly match backends:

        $ ./SANE/bin/scanimage -V
        scanimage (sane-backends) 1.0.24git; backend version 1.0.24
        $ scanimage -V
        scanimage (sane-backends) 1.0.22; backend version 1.0.22

    On that computer the 1.0.24 build correctly finds TPU2 on the Epson V700. So what am I missing or doing wrong? (I want to replace 1.0.22 with 1.0.24 for the whole system; the local build was just for debugging.) Any help would be much appreciated.

    Edit 1: I just tried compiling SANE using this instruction on Ubuntu 10.04 and it worked like a charm. However, when I upgraded to 12.04 (I really would like to run 12.04), SANE was downgraded to 1.0.22. When trying the same set of instructions on 12.04 I was still out of luck -- the backend mismatch was there again (and I do have libusb-dev installed).

    Edit 2: I updated to Ubuntu 12.10, which now ships the 1.0.23 SANE drivers. I haven't dared to try compiling from source on 12.10, since 1.0.23 is good enough for me. This is just a work-around, though, and I would still like to know what's up with Ubuntu 12.04.
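    For anyone comparing notes: as far as I understand it, a mismatch like "1.0.24git; backend version 1.0.22" means the scanimage binary is resolving a different libsane shared library than the one just built (e.g. the distro copy in /usr/lib instead of the fresh one in /usr/local/lib). Two illustrative checks:

        # which libsane does the freshly built scanimage actually load?
        ldd $(which scanimage) | grep -i sane

        # refresh the dynamic linker cache after installing to /usr/local
        sudo ldconfig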

  • DTLoggedExec 1.1.2008.4 Released!

    - by Davide Mauri
    Today I've released the latest version of my DTExec replacement tool, DTLoggedExec. The main changes are the following:

    - Used a new strategy for version numbers. They will now follow the pattern Major.Minor.TargetSQLServerVersion.Revision.
    - Added support for Auto Configurations.
    - Fixed a bug that reported an incorrect number of errors and warnings to Log Providers.
    - Fixed a bug that prevented correct casting of values when using the /Set and /Param options.
    - Errors and warnings are now counted more precisely.
    - Updated database and log import scripts to categorize logs by projects and sections. E.g.: Project: MyBIProject; Sections: Staging, Datawarehouse.
    - Removed unused report stored procedures from the database.
    - Updated samples: 12 samples are now available to show ALL DTLoggedExec features.
    - From this version on, only SSIS 2008 will be supported.

    http://dtloggedexec.codeplex.com/releases/view/62218

    It is useful to say something more on a couple of specific points:

    From this version on, only SSIS 2008 will be supported. Yes, Integration Services 2005 is not supported anymore. The latest version capable of running SSIS 2005 packages is 1.0.0.2.

    Updated database and log import scripts to categorize logs by projects and sections. When you import a log file, you can now assign it to a project and to a section of that project. This makes it easier to gather statistical information for an entire project or a subsection of it. It also allows you to store logged data of packages belonging to different projects in the same database.

    Updated samples. A complete set of samples that shows how to use all DTLoggedExec features now ships with the product. Enjoy!

    Added support for Auto Configurations. This point will get a post of its own, since it's quite important and is by far the biggest new feature introduced in this release. To explain it in a few words: you don't need to waste time with complex DTS configuration files or options, since a package will configure itself automatically. You just need to write a single statement as a parameter for DTLoggedExec. This feature can simplify deployment *a lot* :)

    In the next days I'll write the mentioned post on Auto Configurations and I'll update the documentation available on the DTLoggedExec website: http://dtloggedexec.davidemauri.it/MainPage.ashx

  • How can I set up friendly URLs with Nginx?

    - by MKK
    I'm trying to use DokuWiki with its friendly URLs on Nginx. The problem I'm facing is that it doesn't show the correct path in any link (even stylesheets and images) on every page. It looks like the paths are missing the wiki/ part. If I click on the logo image and check its destination, it shows this URL:

        http://foo-sample.com/lib/tpl/dokuwiki/images/logo.png

    But it has to be this:

        http://foo-sample.com/wiki/lib/tpl/dokuwiki/images/logo.png

    The login URL is not working either. If I click on the login link, it takes me to http://foo-sample.com/wiki/start?do=login&sectok=ff7d4a68936033ed398a8b82ac9 and it says 404 Not Found.

    I took a look at https://www.dokuwiki.org/rewrite#nginx and tried as much as possible, but it still doesn't work. Here are my conf files. How can I fix this problem? DokuWiki is set up in /usr/share/wiki.

    /etc/nginx/conf.d/rails.conf:

        upstream sample {
            ip_hash;
            server unix:/var/run/unicorn/unicorn_foo-sample.sock fail_timeout=0;
        }
        server {
            listen 80;
            server_name foo-sample.com;
            root /var/www/html/foo-sample/public;
            location /wiki {
                alias /usr/share/wiki;
                index doku.php;
            }
            location ~ ^/wiki.+\.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index doku.php;
                fastcgi_split_path_info ^/wiki(.+\.php)(.*)$;
                fastcgi_param SCRIPT_FILENAME /usr/share/wiki$fastcgi_script_name;
                include /etc/nginx/fastcgi_params;
            }
        }

    /usr/share/wiki/.htaccess:

        ## Enable this to restrict editing to logged in users only
        ## You should disable Indexes and MultiViews either here or in the
        ## global config. Symlinks maybe needed for URL rewriting.
        #Options -Indexes -MultiViews +FollowSymLinks

        ## make sure nobody gets the htaccess files
        <Files ~ "^[\._]ht">
            Order allow,deny
            Deny from all
            Satisfy All
        </Files>

        # Uncomment these rules if you want to have nice URLs using
        # $conf['userewrite'] = 1 - not needed for rewrite mode 2
        # Not all installations will require the following line. If you do,
        # change "/dokuwiki" to the path to your dokuwiki directory relative
        # to your document root.
        # If you enable DokuWikis XML-RPC interface, you should consider to
        # restrict access to it over HTTPS only! Uncomment the following two
        # rules if your server setup allows HTTPS.
        RewriteCond %{HTTPS} !=on
        RewriteRule ^lib/exe/xmlrpc.php$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R=301]

        <IfModule mod_geoip.c>
            GeoIPEnable On
            Order deny,allow
            deny from all
            SetEnvIf GEOIP_COUNTRY_CODE JP AllowCountry
            Allow from .googlebot.com
            Allow from .yahoo.net
            Allow from .msn.com
            Allow from env=AllowCountry
        </IfModule>

  • Sync Google Calendar with SharePoint Calendar

    - by dataintegration
    The ADO.NET Providers for Google and SharePoint make it easy to retrieve and update data in both Google's web services and SharePoint. This article shows how the SQL interface to data makes it easy to build applications that need to move data from one source to another. The application described here is a demo Windows application that synchronizes calendar events between Google and SharePoint, but the RSSBus Providers can be used to achieve integrations on both the .NET and the Java platforms, including more sophisticated features like full automation.

    Getting the Events

    Step 1: Google accounts can have several calendars. Obtain a list of a user's Google Calendars by issuing a query to the Calendars table. For example: SELECT * FROM Calendars.

    Step 2: To get a list of the events from a given Google Calendar, issue a query to the CalendarEvents table while specifying the CalendarId from the Calendars table. The resulting events can be filtered further by using the StartDateTime or EndDateTime columns. For example:

        SELECT * FROM CalendarEvents
        WHERE (CalendarId = '[email protected]')
          AND (StartDateTime >= '1/1/2012') AND (StartDateTime <= '2/1/2012')

    Step 3: SharePoint stores data in lists. There are various types of lists, e.g., document lists and calendar lists. A SharePoint account can have several lists of the same type. To find all the calendar lists in SharePoint, use the ListLists stored procedure and inspect the BaseTemplate column.

    Step 4: The SharePoint data provider models each SharePoint list as a table. Get the events in a particular calendar by querying the table with the same name as the list. The events may be filtered further by specifying the EventDate or EndDate columns. For example:

        SELECT * FROM Calendar
        WHERE (EventDate >= '1/1/2012') AND (EventDate <= '2/1/2012')

    Synchronizing the Events

    Synchronizing the events is a simple process. Once the events from Google and SharePoint are available, they can be compared and synchronized based on user preference. The sample application does this based on user input, but it is easy to create one that does the synchronization automatically. The INSERT, UPDATE, and DELETE statements available in both data providers make it easy to create, update, or delete events as needed.

    Pre-Built Demo Application

    The executable for the demo application can be downloaded here. Note that this demo is built using BETA builds of the ADO.NET Provider for Google V2 and the ADO.NET Provider for SharePoint V2, and will expire in 2013.

    Source Code

    You can download the full source of the demo application here. You will need the Google ADO.NET Data Provider V2 and the SharePoint ADO.NET Data Provider V2, which can be obtained here.
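    As a concrete illustration of Step 2: the providers expose the standard ADO.NET surface, so the query pattern is open a connection, issue the SELECT, read rows. The sketch below uses GoogleConnection/the generic ADO.NET interfaces and a Summary column as placeholders; the actual class and column names ship with the provider and may differ, so treat them as assumptions:

        using System;
        using System.Data;

        class CalendarDump
        {
            static void Main()
            {
                // Placeholder connection string; the real one carries Google credentials.
                using (IDbConnection conn = new GoogleConnection("User=...;Password=..."))
                {
                    conn.Open();
                    IDbCommand cmd = conn.CreateCommand();
                    cmd.CommandText =
                        "SELECT * FROM CalendarEvents " +
                        "WHERE (CalendarId = '[email protected]') " +
                        "AND (StartDateTime >= '1/1/2012') AND (StartDateTime <= '2/1/2012')";

                    using (IDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine(reader["Summary"]); // event title column (assumed name)
                    }
                }
            }
        }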

  • Nvidia GT218 repository drivers don't work

    - by user1042840
    I upgraded all packages with the sudo apt-get upgrade command on my Ubuntu 10.04 box, and I now have Ubuntu 12.04 with kernel 3.2.0-29-generic-pae. I have two monitors and the following GPU:

        01:00.0 VGA compatible controller: NVIDIA Corporation GT218 [NVS 300] (rev a2)

    After upgrading to 12.04, I somehow lost my previous setup with one common workspace stretched across two monitors. When Ubuntu starts, only one monitor is on, and I can see this message on the active monitor:

        Not optimum mode. Recommended mode: 1680x1050 60Hz

    I used the Nvidia proprietary drivers on 10.04, but now jockey-text --list shows:

        xorg:nvidia_current - NVIDIA accelerated graphics driver (Proprietary, Disabled, Not in use)
        xorg:nvidia_current_updates - NVIDIA accelerated graphics driver (post-release updates) (Proprietary, Enabled, Not in use)

    When I run sudo nvidia-settings it says:

        You do not appear to be using the NVIDIA X driver. Please edit your X configuration file (just run `nvidia-xconfig` as root), and restart the X server.

    I ran nvidia-xconfig and rebooted, but jockey-text --list says the same after the reboot: "Not in use". The same with nvidia-current: Enabled, but "Not in use". I also tried nvidia-173, but I ended up in a tty immediately at startup, so I removed it. I used to have some problems with the Nvidia proprietary drivers on 10.04 -- I had to put paths to EDID files in /etc/X11/xorg.conf explicitly -- but the resolution was as recommended and both monitors were working.

    If I understand correctly, the nouveau drivers are now used by default, because the resolution is still quite high, definitely not 800x600. xrandr shows:

        xrandr: Failed to get size of gamma for output default
        Screen 0: minimum 320 x 400, current 1600 x 1200, maximum 1600 x 1200
        default connected 1600x1200+0+0 0mm x 0mm
           1600x1200      66.0*
           1280x1024      76.0
           1024x768       76.0
           800x600        73.0
           640x480        73.0
           640x400         0.0
           320x400         0.0
           1680x1050_60.00 (0x4f)  146.2MHz
                h: width 1680 start 1784 end 1960 total 2240 skew 0 clock 65.3KHz
                v: height 1050 start 1053 end 1059 total 1089 clock 60.0Hz

    However, colors seem a bit faded and blurry with the nouveau drivers, the mouse cursor is invisible when placed inside a Firefox window, and only one monitor is working. I like open source, and if possible I'd prefer to use the nouveau drivers, but a few things would need to be fixed. I'm also curious why the nvidia-current drivers from the repository don't work now. I read it has something to do with the new X11 server in Ubuntu 12.04 -- is that true? How can I get it back to work?

  • How can I get my ATI / AMD drivers to work with any kernel above 3.2.0.x?

    - by TorakTu
    How can I get my ATI / AMD drivers to work with any kernel above 3.2.0.x?

    What did work:

    - Installed the original AMD64 version of the Ubuntu 12.04 ISO image.
    - Burned a DVD and installed, which showed kernel 3.2.0-23 to begin with.
    - Got 5.1 surround sound working.
    - Got the ATI (now AMD) video drivers for my Radeon HD R6870 video card installed from AMD's website. fglrxinfo came up and reported as normal.

    The problem: Kernel 3.2.0.x kept locking up, so I tried higher kernel versions. But the ATI / AMD drivers do not install on any kernel above 3.2.0.x.

    What I have tried: I have gone over this tutorial many times (https://help.ubuntu.com/community/BinaryDriverHowto/ATI) and it doesn't work on ANY kernel except 3.2.0.x. The problem is that the ATI / AMD drivers work on 12.04 Precise with kernels 3.2.0-23 and -24, but the computer kept locking up. Although all my games would work, the lock-ups were random and constant. So I looked all over the web for three days trying to find an answer, and for the lock-up issue the advice was simply to update the kernel. So I did. I have tried many kernels, and with all of them: no lock-ups. BUT the restricted AMD drivers from the AMD website will not install, and none of the open-source AMD drivers have EVER installed, no matter what kernel or version I tried.

    Example output of 3D-type errors:

        javax.media.opengl.GLException: glXGetConfig failed: error code GLX_NO_EXTENSION
            at com.sun.opengl.impl.x11.X11GLDrawableFactory.glXGetConfig(X11GLDrawableFactory.java:651)
            at com.sun.opengl.impl.x11.X11GLDrawableFactory.xvi2GLCapabilities(X11GLDrawableFactory.java:350)
            at com.sun.opengl.impl.x11.X11GLDrawableFactory.chooseGraphicsConfiguration(X11GLDrawableFactory.java:174)
            at javax.media.opengl.GLCanvas.chooseGraphicsConfiguration(GLCanvas.java:520)
            at javax.media.opengl.GLCanvas.<init>(GLCanvas.java:131)
            at haven.HavenPanel.<init>(HavenPanel.java:68)
            at haven.HavenPanel.<init>(HavenPanel.java:78)
            at haven.MainFrame.<init>(MainFrame.java:182)
            at haven.MainFrame.main2(MainFrame.java:306)
            at haven.MainFrame.access$100(MainFrame.java:34)
            at haven.MainFrame$7.run(MainFrame.java:360)
            at java.lang.Thread.run(Thread.java:722)

    And of course this is what fglrxinfo shows:

        X Error of failed request:  BadRequest (invalid request code or no such operation)
          Major opcode of failed request:  139 (ATIFGLEXTENSION)
          Minor opcode of failed request:  66 ()
          Serial number of failed request:  13
          Current serial number in output stream:  13

    Edit: I forgot to mention that I DID look at this post over the last few days and it did not help.

  • Ubuntu 10.10 not recognizing external hard drive

    - by sr71
    I installed Ubuntu 10.10 on a hard drive by itself (no Windows). All updates are up to date. Everything works fine, except that when I plug in a 320 GB Toshiba external hard drive, it is not recognized. When I plug in an 8 GB flash drive, it is recognized with no problem.

    What do I mean by "not recognized"? A commenter asked how you would know, and suggested: you can check the command cat /proc/partitions in a terminal with and without the HDD attached. If you see some difference, it's OK. If the difference is something like "sda1" (sd + letter + number), then you have a partition on it; maybe it can't be handled in Ubuntu (no filesystem on it, or so). If the difference is only something like "sda" (sd + letter, but no number after it), then the drive itself is detected, but no partition is seen on it. You can also check the kernel messages with the dmesg command in the terminal. If there is no disk/partition/anything in /proc/partitions, there may be a USB-level problem, and you can issue the lsusb command as was suggested before.

    @Marco Ceppi @Oli @Pitto: By "not recognized" I meant that when I plug in an 8 GB flash drive, an icon immediately appears on the desktop indicating that there is a flash drive plugged in, and I can click on it and view the files on it. It also shows up in the Nautilus default file manager. When I plug in the 320 GB Toshiba external hard drive, I get no indication on the desktop or in the file manager (thus "not recognized"). When I run the command "cat/proc/partitions/" I get an error message,

        Could not open location 'file:///home/bob/cat/proc/partitions'

    with or without the external hard drive attached. With the dmesg command I get about 10 pages of info, with no mention of disk/partition/anything in /proc/partitions. When I run the lsusb command I get the following:

        bob@bob-desktop:~$ lsusb
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 003: ID 04b3:3003 IBM Corp. Rapid Access III Keyboard
        Bus 003 Device 002: ID 04b3:3004 IBM Corp. Media Access Pro Keyboard
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
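    (A note for anyone following along: the "Could not open location" error above is what Nautilus reports when a command is typed into its location bar; the checks suggested earlier are meant to be run in a terminal, with a space after cat, i.e.:)

        # compare the partition list with and without the drive attached
        cat /proc/partitions

        # kernel messages from the most recent hotplug events
        dmesg | tail -n 30

        # list the USB devices the kernel can see
        lsusb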

  • How do I set the image position in Conky?

    - by realitygenerator
    I copied and modified an existing .conkyrc file from the Ubuntu forum, and I'm trying to place the LinuxMint logo in a specific position. Below are my conkyrc file and the screenshot.

        # UBUNTU-CONKY
        # A comprehensive conky script, configured for use on
        # Ubuntu / Debian Gnome, without the need for any external scripts.
        #
        # Based on conky-jc and the default .conkyrc.
        # INCLUDES:
        # - tail of /var/log/messages
        # - netstat shows number of connections from your computer and application/PID making it. Kill spyware!
        #
        # -- Pengo
        #

        # Create own window instead of using desktop (required in nautilus)
        own_window yes
        own_window_type desktop
        own_window_transparent yes
        own_window_hints undecorated,below,sticky,skip_taskbar,skip_pager

        # Use double buffering (reduces flicker, may not work for everyone)
        double_buffer yes

        # fiddle with window
        use_spacer right

        # Use Xft?
        use_xft yes
        xftfont URW Gothic:size=8
        xftalpha 0.8
        text_buffer_size 2048

        # Update interval in seconds
        update_interval 3.0

        # Minimum size of text area
        # minimum_size 250 5

        # Draw shades?
        draw_shades no

        # Text stuff
        draw_outline no # amplifies text if yes
        draw_borders no
        uppercase no # set to yes if you want all text to be in uppercase

        # Stippled borders?
        stippled_borders 3

        # border margins
        border_margin 9

        # border width
        border_width 10

        # Default colors and also border colors, grey90 == #e5e5e5
        default_color grey
        own_window_colour brown
        own_window_transparent yes

        # Text alignment, other possible values are commented
        #alignment top_left
        #alignment top_right
        #alignment bottom_left
        #alignment bottom_right.
        alignment top_middle

        # Gap between borders of screen and text
        gap_x 10
        gap_y 10

        #Display temp in fahrenheit
        temperature_unit fahrenheit

        #Choose which screen on which to display
        # stuff after 'TEXT' will be formatted on screen
        TEXT
        $color
        ${color green}SYSTEM ${hr 2}$color
        $nodename $sysname $kernel on $machine
        LinuxMint 11 "Katya" (Oneric)
        ${image ~/Conky/Logo_Linux_Mint.png -s 80x60 -f 86400}

        ${color green}CPU ${hr 2}$color
        ${freq}MHz Load: ${loadavg} Temp: ${hwmon temp 1}
        $cpubar
        ${cpugraph 000000 ffffff}
        NAME PID CPU% MEM%
        ${top name 1} ${top pid 1} ${top cpu 1} ${top mem 1}
        ${top name 2} ${top pid 2} ${top cpu 2} ${top mem 2}
        ${top name 3} ${top pid 3} ${top cpu 3} ${top mem 3}
        ${top name 4} ${top pid 4} ${top cpu 4} ${top mem 4}

        ${color green}MEMORY / DISK ${hr 2}$color
        RAM: $memperc% ${membar 6}$color
        Swap: $swapperc% ${swapbar 6}$color
        Root: ${fs_free_perc /}% ${fs_bar 6 /}$color
        hda1: ${fs_free_perc /media/sda1}% ${fs_bar 6 /media/sda1}$color

        ${color green}NETWORK (${addr eth1}) ${hr 2}$color
        Down: $color${downspeed eth1} k/s ${alignr}Up: ${upspeed eth1} k/s
        ${downspeedgraph eth1 25,140 000000 ff0000} ${alignr}${upspeedgraph eth1 25,140 000000 00ff00}$color
        Total: ${totaldown eth1} ${alignr}Total: ${totalup eth1}
        ${execi 30 netstat -ept | grep ESTAB | awk '{print $9}' | cut -d: -f1 | sort | uniq -c | sort -nr}

        ${color green}LOGGING ${hr 2}$color
        ${execi 30 tail -n3 /var/log/messages | awk '{print " ",$5,$6,$7,$8,$9,$10}' | fold -w50}

        ${color green}FORTUNE ${hr 2}$color
        ${execi 120 fortune -s | fold -w50}

    I want to put the Mint logo right after the word (Oneric). Any help would be greatly appreciated.
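    (For reference: Conky's ${image} object accepts a -p x,y argument giving an explicit pixel position inside the Conky window, which is the usual way to pin an image instead of letting it follow the text flow. An illustrative variant of the logo line -- the coordinates are just a guess and would need tuning to land next to the distro line:)

        ${image ~/Conky/Logo_Linux_Mint.png -p 170,40 -s 80x60 -f 86400}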

  • More on Map Testing

    - by Michael Stephenson
    I have been chatting with Maurice den Heijer recently about his CodePlex project, the BizTalk Map Testing Framework (http://mtf.codeplex.com/). Some of you may remember the article I did for BizTalk 2009 and 2006 about how to test maps; with his project, Maurice is effectively looking at how to improve productivity and quality by building useful testing features into the framework to simplify the process of testing maps.

    As part of our discussion we realized that we both had slightly different approaches to validating the output from the map. Put simply, Maurice does some XPath validation of the data in various nodes, whereas my approach for most standard cases is to use serialization, which lets you validate the output using normal MSTest assertions. I'm not really going to go into the pros and cons of each approach, because I think there is a place for both, and I'm sure others have various approaches which work too. What would be great is for the map testing framework to provide support for different ways of testing, covering everything from simple cases to some very specialized scenarios. So, as agreed with Maurice, I have put together the sample discussed in the rest of this article to show how we can use the serialization approach to create and compare the input and output of a map in normal development testing.

    Prerequisites

    One of the common patterns I usually implement when developing BizTalk solutions is to use xsd.exe to create .NET classes for most of the schemas used within the solution. In the testing pattern I will take advantage of these .NET classes.

    The Map

    In this sample the map we will use is very simple and just concatenates some data from the input message to the output message. Hopefully the picture below illustrates this well.

    The Test

    In the test I'm basically taking the following actions (see the sketch after this list):

    - Use the .NET class generated from the schema to create an input message for the map
    - Serialize the input object to a file
    - Run the map from .NET using the standard BizTalk test method which was generated for running the map
    - Deserialize the output file from the map execution into a .NET class representing the output schema
    - Use MSTest assertions to validate things about the output message

    As you can see, the code for this is pretty simple and it's all strongly typed, which means changes to my schema which could affect the tests are easily picked up as compilation errors. I can then choose to have one test which validates most of the output from the map, or many specific tests covering individual scenarios within the map.

    Summary

    Hopefully this post illustrates a powerful yet simple way of effectively testing many BizTalk mapping scenarios. I will probably have more conversations with Maurice about these approaches, and perhaps some of the above will be included in the mapping test framework.

    The sample can be downloaded from here: http://cid-983a58358c675769.office.live.com/self.aspx/Blog%20Samples/More%20Map%20Testing/MapTestSample.zip
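    Since the screenshots don't reproduce here, the following is a minimal sketch of what such a test looks like. SourceMessage and TargetMessage stand in for the xsd.exe-generated schema classes, and RunMap stands in for the Visual Studio-generated BizTalk map-test call, so treat those three names as placeholders:

        using System.IO;
        using System.Xml.Serialization;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class ConcatMapTests
        {
            [TestMethod]
            public void Map_Concatenates_Names()
            {
                // 1. Build the input message with the xsd.exe-generated class.
                var input = new SourceMessage { FirstName = "Michael", LastName = "Stephenson" };

                // 2. Serialize the input object to a file.
                var inputPath = Path.GetTempFileName();
                using (var writer = File.CreateText(inputPath))
                    new XmlSerializer(typeof(SourceMessage)).Serialize(writer, input);

                // 3. Run the map (placeholder for the generated BizTalk test method).
                var outputPath = Path.GetTempFileName();
                RunMap(inputPath, outputPath);

                // 4. Deserialize the map output into the generated output class.
                TargetMessage output;
                using (var reader = File.OpenText(outputPath))
                    output = (TargetMessage)new XmlSerializer(typeof(TargetMessage)).Deserialize(reader);

                // 5. Assert on the strongly typed result.
                Assert.AreEqual("Michael Stephenson", output.FullName);
            }
        }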

  • Uniform not being applied to proper mesh

    - by HaMMeReD
    OK, I've got some code where you select blocks on a grid. The selection works. I can modify the blocks to be raised when selected, and the correct one shows as selected. I set a color which I use in the shader. However, when I try to change the color before rendering the geometry, the last-rendered geometry (in the sequence) is the one rendered light. To debug the logic I decided to both move the selected block up and make it white -- in which case one block moves up and a different block becomes white. I checked all my logic: it knows the correct one is selected, it shows it in the correct place, and it renders correctly when there is only one block. Video of the bug in action; note how the highlighted and elevated blocks are not the same block. The code for the color and my renderer (for the items being drawn) is here:

        public void render(Renderer renderer) {
            mGrid.render(renderer, mGameState);
            for (Entity e : mGameEntities) {
                UnitTypes ut = UnitTypes.valueOf((String) e.getObject(D.UNIT_TYPE.ordinal()));
                if (ut == UnitTypes.Soldier) {
                    renderer.testShader.begin();
                    renderer.testShader.setUniformMatrix("u_mvpMatrix", mEntityMatrix);
                    renderer.texture_soldier.bind(0);
                    Vector2 pos = (Vector2) e.getObject(D.COORDS.ordinal());
                    mEntityMatrix.set(renderer.mCamera.combined);
                    if (mSelectedEntities.contains(e)) {
                        mEntityMatrix.translate(pos.x, 1f, pos.y);
                        renderer.testShader.setUniformf("v_color", 0.5f, 0.5f, 0.5f, 1f);
                    } else {
                        mEntityMatrix.translate(pos.x, 0f, pos.y);
                        renderer.testShader.setUniformf("v_color", 1f, 1f, 1f, 1f);
                    }
                    mEntityMatrix.scale(0.2f, 0.2f, 0.2f);
                    renderer.model_soldier.render(renderer.testShader, GL20.GL_TRIANGLES);
                    renderer.testShader.end();
                } else if (ut == UnitTypes.Enemy_Infiltrator) {
                    renderer.testShader.begin();
                    renderer.testShader.setUniformMatrix("u_mvpMatrix", mEntityMatrix);
                    renderer.testShader.setUniformf("v_color", 1.0f, 1, 1, 1.0f);
                    renderer.texture_enemy_infiltrator.bind(0);
                    Vector2 pos = (Vector2) e.getObject(D.COORDS.ordinal());
                    mEntityMatrix.set(renderer.mCamera.combined);
                    mEntityMatrix.translate(pos.x, 0f, pos.y);
                    mEntityMatrix.scale(0.2f, 0.2f, 0.2f);
                    renderer.model_enemy_infiltrator.render(renderer.testShader, GL20.GL_TRIANGLES);
                    renderer.testShader.end();
                }
            }
        }
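    (One detail worth noting when reading the code above: u_mvpMatrix is uploaded right after begin(), before mEntityMatrix is rebuilt for the current entity, so each entity is drawn with the previous iteration's matrix while v_color is set for the current one -- which would match the "raised block and tinted block differ" symptom exactly. A minimal reordering sketch, same calls, with the matrix finished before either uniform is uploaded:)

        // inside the Soldier branch, after fetching pos:
        boolean selected = mSelectedEntities.contains(e);

        // rebuild the matrix first ...
        mEntityMatrix.set(renderer.mCamera.combined);
        mEntityMatrix.translate(pos.x, selected ? 1f : 0f, pos.y);
        mEntityMatrix.scale(0.2f, 0.2f, 0.2f);

        // ... then upload BOTH uniforms for *this* entity
        renderer.testShader.setUniformMatrix("u_mvpMatrix", mEntityMatrix);
        float tint = selected ? 0.5f : 1f;
        renderer.testShader.setUniformf("v_color", tint, tint, tint, 1f);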

  • Feedback from SQLBits 8

    - by Peter Larsson
    This year's SQLBits took place in Brighton. Although I didn't have the opportunity to attend the full conference, I did a presentation on Saturday.

    Getting to Brighton was easy. I drove to Copenhagen airport at 0415, flew at 0605 and arrived at Gatwick at 0735. Then I took the direct train to Brighton and showed up at 0830, just one hour before presenting. That was the easy part. Getting home was much worse. The presentation ended at 1030, and I had to rush to the train station to get back to London and change to the tube for Heathrow. I made it to the gate just 15 seconds before closing. That included a half-mile run in the airport...

    Anyway, yesterday I got the feedback for my presentation, and it looks good, especially since English is not my first language. This is the first graph:

    It seems to be just halfway between the conference average and the best session. I can live with that.

    The second graph shows more detail about the attendees' voting. It also looks acceptable. There is a wider spread for the 9's, but that is an inevitable effect of how attendees perceive the session. I did get a lot of 8's, with the lower grades in descending order. The two people voting 4 and 5 didn't say why they voted that way, so I don't know how to remedy it.

    The third graph is about each category of votes.

    Again, I find this acceptable. The "session abstract" and "speaker's knowledge" categories seem to follow the attendees' expectations compared to the conference average. I seem to have met the attendees' expectations (and then some) for the other four categories, also compared to the conference average.

    Since this did encourage me, I believe I will present some more at future meetings. I have a new presentation about something all developers do every day, though they may not know it. I will also cover this new topic in the next Deep Dives II book. Stay tuned!

    //Peter

  • PL/SQL to delete invalid data from token Strings

    - by Jie Chen
    The previous article describes how to delete duplicated values from a token string in bulk mode. This one extends it and shows how to delete invalid data.

    Scenario

    Suppose we have page_two and manufacturers tables in the database, with the following DDL:

        SQL> desc page_two;
        Name                    NULL?    TYPE
        ----------------------- -------- ------------------------
        MULTILIST04                      VARCHAR2(765)

        SQL> desc manufacturers;
        Name                    NULL?    TYPE
        ----------------------- -------- ------
        ID                      NOT NULL NUMBER
        NAME                             VARCHAR

    In table page_two, column multilist04 stores a token string split by commas. Each token represents a valid ID in the manufacturers table. My expectation is to delete invalid tokens from page_two.multilist04 -- those with no matching ID in manufacturers.id. For example, in the value

        ,6295728,33,6295729,6295730,6295731,22,

    the tokens 33 and 22 are invalid data, because there is no ID equal to 33 or 22 in the manufacturers table. So I need to delete 33 and 22.

        SQL> col rowid format a20;
        SQL> col multilist04 format a50;
        SQL> select rowid, multilist04 from page_two;

        ROWID                MULTILIST04
        -------------------- --------------------------------------------------
        AAB+UrADfAAAAhUAAI   ,6295728,6295729,6295730,6295731,
        AAB+UrADfAAAAhUAAJ   ,1111,6295728,6295729,6295730,6295731,
        AAB+UrADfAAAAhUAAK   ,6295728,111,6295729,6295730,6295731,
        AAB+UrADfAAAAhUAAL   ,6295728,6295729,6295730,6295731,22,
        AAB+UrADfAAAAhUAAM   ,6295728,33,6295729,6295730,6295731,22,

        SQL> select id, encode_name from manufacturers where id in (1111,11,22,33);

        No rows selected

    Solution

    As there is no built-in SPLIT function or anything similar in PL/SQL, I had to program it myself. I wrote an intermediate Split function that returns the token value between the current splitter and the next one. The main program is the entry point: it gets each multilist04 column value from page_two and processes each row with a cursor. For each value, it uses the Split function to extract each token into the singValue variable, then checks whether that token exists in manufacturers.id. If it is not found, it sets fixFlag to 1, marking the token for deletion.
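    The Split function itself isn't shown here; a minimal reconstruction of such a helper using INSTR and SUBSTR (my own sketch, not necessarily the original code) could look like this:

        CREATE OR REPLACE FUNCTION split_token (
            p_list  IN VARCHAR2,   -- full token string, e.g. ',6295728,33,...'
            p_pos   IN PLS_INTEGER -- 1-based index of the token to return
        ) RETURN VARCHAR2 IS
            v_start PLS_INTEGER;
            v_end   PLS_INTEGER;
        BEGIN
            -- position of the comma that opens the requested token
            v_start := INSTR(p_list, ',', 1, p_pos);
            -- position of the comma that closes it
            v_end := INSTR(p_list, ',', 1, p_pos + 1);
            IF v_start = 0 OR v_end = 0 THEN
                RETURN NULL; -- no such token
            END IF;
            RETURN SUBSTR(p_list, v_start + 1, v_end - v_start - 1);
        END split_token;
        /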

  • Cheese won't start

    - by Anthony Hohenheim
    I can't start Cheese Webcam Booth. It starts loading, and there is a brief moment when the window shows up, but then it disappears, as if it shuts itself down, and it's not listed in the System Monitor. My webcam works perfectly in a Skype video call. I installed and ran Camorama, and it gave me this error:

        Could not connect to video device (/dev/video0)
        Please check connection

    When I run lsusb I get this line for my webcam:

        Bus 002 Device 002: ID 04f2:b210 Chicony Electronics Co., Ltd

    And for my graphics card, running lspci:

        VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)

    It's not a pressing matter, but it bugs my nerves: if the camera works in Skype, why do Cheese and other programs refuse to run? Running Cheese in a terminal gives:

        (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkImage to a GtkToggleButton, but as a GtkBin subclass a GtkToggleButton can only contain one widget at a time; it already contains a widget of type GtkLabel
        (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkImage to a GtkToggleButton, but as a GtkBin subclass a GtkToggleButton can only contain one widget at a time; it already contains a widget of type GtkLabel
        (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkImage to a GtkToggleButton, but as a GtkBin subclass a GtkToggleButton can only contain one widget at a time; it already contains a widget of type GtkLabel
        (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkHBox to a GtkButton, but as a GtkBin subclass a GtkButton can only contain one widget at a time; it already contains a widget of type GtkLabel
        (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkImage to a GtkButton, but as a GtkBin subclass a GtkButton can only contain one widget at a time; it already contains a widget of type GtkLabel
        (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkHBox to a GtkToggleButton, but as a GtkBin subclass a GtkToggleButton can only contain one widget at a time; it already contains a widget of type GtkLabel
        (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkImage to a GtkButton, but as a GtkBin subclass a GtkButton can only contain one widget at a time; it already contains a widget of type GtkLabel
        (cheese:11454): Gdk-WARNING **: The program 'cheese' received an X Window System error.
        This probably reflects a bug in the program.
        The error was 'BadDrawable (invalid Pixmap or Window parameter)'.
          (Details: serial 932 error_code 9 request_code 137 minor_code 9)
          (Note to programmers: normally, X errors are reported asynchronously;
           that is, you will receive the error a while after causing it.
           To debug your program, run it with the --sync command line option to
           change this behavior. You can then get a meaningful backtrace from
           your debugger if you break on the gdk_x_error() function.)

  • Running a program on boot without login, using the screen

    - by configurator
    Preface: I have a server running on an old laptop. The screen is always on with a login prompt, but because its keyboard is in pretty bad shape, I use it exclusively via ssh. The screen is in a good position, though; I want to use it to display a clock and some stats about what my server is doing. I have scripts to display all those things, but I want to show them permanently on the monitor.

    My question is: how do I get my script (called HUD) to run on /dev/tty1 instead of the login prompt? Hopefully it should also be possible to accept keyboard input as well as display output, so that a future version can use the keyboard to show more info where needed. I'd also like tty2 etc. to remain active as login screens; in fact, I do occasionally need to log in locally.

    For a start, I tried creating a script that I can run from ssh to start the HUD. It goes something like this:

        ( flock -n 9 watch --interval 0.2 --precise --color --notitle --exec /path/to/script & disown ) 9> /var/lock/hud > /dev/tty1 2> /dev/tty1 < /dev/tty1

    (I had to use & disown instead of nohup, because nohup recognizes the tty and redirects output to nohup.out instead.)

    This sort of works. However, it has a few issues:

    - It doesn't steal the terminal's keyboard input, so you can't press Ctrl+C to get out of it (nor could the script actually use the keyboard input), and if you press Enter it echoes and scrolls the display, which never refreshes correctly afterwards.
    - Oddly, if I disconnect the ssh session which created it, it stops working and shows a message: exec: No such file or directory. If I reconnect via ssh, it resumes functioning properly.
    - It feels hackish.

    Is there a better way to do this? How?
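    (For completeness: one tool that seems designed for exactly this is openvt(1) from the kbd package -- it starts a program on a chosen virtual terminal with that terminal's keyboard as its stdin, which would address the input problem directly. An illustrative invocation, with the path being a placeholder:)

        # run the HUD on virtual terminal 1, forcing the takeover
        # even though a getty currently owns that VT
        sudo openvt -f -c 1 -- /path/to/hud-script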

  • Cannot install wine on Ubuntu natty 64bit [broken dependency]

    - by MHK
    I've just installed Ubuntu Natty 64-bit. Now I'm trying to install Wine, and no matter how I do it (Software Center/Synaptic/terminal), it fails. Here's what I tried in a terminal:

        sudo apt-get update
        sudo apt-get install wine

    It shows:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         wine : Depends: wine1.3 but it is not going to be installed
                Depends: ia32-libs (>= 1.6) but it is not going to be installed
                Depends: lib32asound2 (> 1.0.14) but it is not going to be installed
                Depends: libc6-i386 (>= 2.6-1) but it is not going to be installed
                Depends: lib32nss-mdns (>= 0.10-3) but it is not going to be installed
        E: Broken packages

    Has anybody faced this? Is this a bug, or is something broken on my end? Any hints on how to solve it?

    Edit: I've tried with aptitude, which gives a clearer message. Running sudo aptitude install wine outputs:

        The following NEW packages will be installed:
          gnome-exe-thumbnailer{a} ia32-libs{a} icoutils{a} imagemagick{a} lib32asound2{ab} lib32bz2-1.0{a} lib32gcc1{ab} lib32ncurses5{a} lib32nss-mdns{a} lib32stdc++6{ab} lib32v4l-0{ab} lib32z1{a} libc6-i386{ab} libcdt4{a} libgraph4{a} libgvc5{a} libilmbase6{a} liblqr-1-0{a} libmagickcore3{a} libmagickcore3-extra{a} libmagickwand3{a} libnetpbm10{a} libopenexr6{a} libpathplan4{a} netpbm{a} ttf-droid{a} ttf-symbol-replacement-wine1.3{a} ttf-umefont{a} winbind{a} wine wine1.3{a} wine1.3-gecko{a} winetricks{a}
        0 packages upgraded, 33 newly installed, 0 to remove and 0 not upgraded.
        Need to get 135 MB of archives. After unpacking 421 MB will be used.
        The following packages have unmet dependencies:
          libc6-i386: Depends: libc6 (= 2.12.1-0ubuntu16) but 2.13-0ubuntu13 is installed.
          lib32gcc1: Depends: gcc-4.5-base (= 4.5.2-2ubuntu3) but 4.5.2-8ubuntu4 is installed.
          lib32asound2: Depends: libasound2 (= 1.0.23-2.1ubuntu2) but 1.0.24.1-0ubuntu5 is installed.
          lib32stdc++6: Depends: gcc-4.5-base (= 4.5.2-2ubuntu3) but 4.5.2-8ubuntu4 is installed.
          lib32v4l-0: Depends: libv4l-0 (= 0.8.1-2) but 0.8.3-1 is installed.
        The following actions will resolve these dependencies:
          Keep the following packages at their current version:
          1)  ia32-libs [Not Installed]
          2)  lib32asound2 [Not Installed]
          3)  lib32bz2-1.0 [Not Installed]
          4)  lib32gcc1 [Not Installed]
          5)  lib32ncurses5 [Not Installed]
          6)  lib32nss-mdns [Not Installed]
          7)  lib32stdc++6 [Not Installed]
          8)  lib32v4l-0 [Not Installed]
          9)  lib32z1 [Not Installed]
          10) libc6-i386 [Not Installed]
          11) wine [Not Installed]
          12) wine1.3 [Not Installed]
        Leave the following dependencies unresolved:
          13) wine1.3-gecko recommends wine1.3
          14) winetricks recommends wine1.2 | wine1.3 | cxoffice5 | cxgames5

    It seems the Wine package hasn't been updated in the repo. What should I do now?

  • RAID1: can't replace faulty spare (marked again as 'faulty spare' within seconds)

    - by user212475
    I have a problem that I cannot solve. Our file server runs Xubuntu and three RAID1 arrays. One of them has had a problem since Monday: it consists of sdb and sdc. sdb was marked as faulty by mdadm for unknown reasons. I used --remove to remove it from the RAID and then --add to add it back. All seemed fine and re-syncing started, but it never got above 0%, and after a few seconds sdb was again marked as a 'faulty spare' (and the RAID was therefore degraded, but clean).

    So I saved the first 512 bytes of the old sdb to a file, bought a new HDD of the same size (4 TB), shut down the computer and replaced sdb physically, switched the computer back on, and wrote the 512 bytes back to the new drive so it would have the same partition info as the old drive (both are the same type, from the same company). But the new drive shows the same behaviour as the old one: I can add it, re-syncing starts, and after a few seconds it is marked as a 'faulty spare'.

    Here is exactly what I did:

        mdadm --remove /dev/md/1 /dev/sdb

    mdadm --detail /dev/md/1 gives me:

        /dev/md/1:
                Version : 1.2
          Creation Time : Sat Jun  8 22:32:05 2013
             Raid Level : raid1
             Array Size : 3906887360 (3725.90 GiB 4000.65 GB)
          Used Dev Size : 3906887360 (3725.90 GiB 4000.65 GB)
           Raid Devices : 2
          Total Devices : 1
            Persistence : Superblock is persistent

            Update Time : Thu Nov  7 06:56:13 2013
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 0
          Spare Devices : 0

                   Name : File-Server:1  (local to host File-Server)
                   UUID : 44ed561f:b733e946:e69820f4:aba9b223
                 Events : 2424

            Number   Major   Minor   RaidDevice State
               0       0        0        0      removed
               1       8       32        1      active sync   /dev/sdc

    Then:

        mdadm --add /dev/md/1 /dev/sdb

    mdadm --detail /dev/md/1 gives me:

                Version : 1.2
          Creation Time : Sat Jun  8 22:32:05 2013
             Raid Level : raid1
             Array Size : 3906887360 (3725.90 GiB 4000.65 GB)
          Used Dev Size : 3906887360 (3725.90 GiB 4000.65 GB)
           Raid Devices : 2
          Total Devices : 2
            Persistence : Superblock is persistent

            Update Time : Thu Nov  7 06:57:49 2013
                  State : clean, degraded, recovering
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 1
          Spare Devices : 0

         Rebuild Status : 0% complete

                   Name : File-Server:1  (local to host File-Server)
                   UUID : 44ed561f:b733e946:e69820f4:aba9b223
                 Events : 2431

            Number   Major   Minor   RaidDevice State
               2       8       16        0      faulty spare rebuilding   /dev/sdb
               1       8       32        1      active sync   /dev/sdc

    and after a few seconds:

        /dev/md/1:
                Version : 1.2
          Creation Time : Sat Jun  8 22:32:05 2013
             Raid Level : raid1
             Array Size : 3906887360 (3725.90 GiB 4000.65 GB)
          Used Dev Size : 3906887360 (3725.90 GiB 4000.65 GB)
           Raid Devices : 2
          Total Devices : 2
            Persistence : Superblock is persistent

            Update Time : Thu Nov  7 06:57:50 2013
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 1
          Spare Devices : 0

                   Name : File-Server:1  (local to host File-Server)
                   UUID : 44ed561f:b733e946:e69820f4:aba9b223
                 Events : 2436

            Number   Major   Minor   RaidDevice State
               0       0        0        0      removed
               1       8       32        1      active sync   /dev/sdc
               2       8       16        -      faulty spare   /dev/sdb

    The behaviour is the same if I zero the superblock (mdadm --zero-superblock /dev/sdb) before adding sdb. I run all commands as root, and the system holds three more 4 TB drives, i.e. the mainboard can handle them. The old hard drive was checked for errors using badblocks, and all is fine. Does anybody have any idea what the problem is?
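    (Before swapping any more hardware, it may be worth capturing *why* the kernel fails the member each time; some illustrative checks right after the failure:)

        # why did the md layer kick the member?
        dmesg | grep -i -E 'sdb|md1'

        # the drive's own health log (smartmontools package)
        sudo smartctl -a /dev/sdb

        # per-member state as md sees it
        cat /proc/mdstat
        sudo mdadm --examine /dev/sdb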

  • Plans for Java 7 and E-Business Suite Certification

    - by Steven Chan (Oracle Development)
    As of June 2012, Java 7 has not yet been certified with Oracle E-Business Suite. EBS customers should continue to run JRE 6 on their Windows end-user desktops, and JDK 6 on their EBS servers. If a search engine has brought you to this article, please check the Certifications summary for our latest certified Java release.

    Our plans for certifying Java 7 for the E-Business Suite

    We plan on releasing the Java 7 certification for E-Business Suite customers in two phases:

    - Phase 1: Certify JRE 7 for Windows end-user desktops
    - Phase 2: Certify JDK 7 for server-based components

    When will Java 7 be certified with EBS?

    We're working on the first phase now. As usual, I cannot discuss release dates here, but you can monitor or subscribe to this blog for updates.

    Current known issues with JRE 7 in EBS environments

    Our current testing shows that there are known incompatibilities between JRE 7 and the Forms-invocation process in EBS environments. We have been working directly with the Java division on this for a while now. In the meantime, EBS customers should not deploy JRE 7 to their end-user Windows desktop clients. You should stick with JRE 1.6 for now.

    But wait, you previously said...

    Older JRE certification announcements stated: "Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops from JRE 1.6.0_03 and higher. We test all new JRE releases in parallel with the JRE development process, so all JRE releases are considered certified with the E-Business Suite on the same day that they're released by our Java team. You do not need to wait for a certification announcement before applying new JRE releases to your EBS users' desktops."

    Yes, this is true. This standard boilerplate text was written before JRE 7 was released, so there was no possibility of misunderstanding. With the availability of JRE 7, that boilerplate needs to be revised to read: "Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops from JRE 1.6.0_03 and later updates on the 1.6 codeline. We test all new JRE 1.6 releases in parallel with the JRE development process, so all new JRE 1.6 releases are considered certified with the E-Business Suite on the same day that they're released by our Java team. You do not need to wait for a certification announcement before applying new JRE 1.6 releases to your EBS users' desktops."

    References

    - Recommended Browsers for Oracle Applications 11i (MetaLink Note 285218.1)
    - Upgrading Sun JRE (Native Plug-in) with Oracle Applications 11i for Windows Clients (MetaLink Note 290807.1)
    - Recommended Browsers for Oracle Applications 12 (MetaLink Note 389422.1)
    - Upgrading JRE Plugin with Oracle Applications R12 (MetaLink Note 393931.1)

    Related Articles

    - Mismanaged Session Cookie Issue Fixed for EBS in JRE 1.6.0_23
    - Roundup: Oracle JInitiator 1.3 Desupported for EBS Customers in July 2009

  • Top-down space game control problem

    - by Phil
    As the title suggests, I'm developing a top-down space game. I'm not looking to use Newtonian physics for the player-controlled ship. I'm trying to achieve a control scheme somewhat similar to that of FlatSpace 2 (awesome game), but I can't figure out how to achieve this feeling with keyboard controls as opposed to mouse controls. Any suggestions? I'm using Unity3D, and C# or JavaScript (UnityScript, or whatever the correct term is) works fine if you want to drop in some code examples.

    Edit: Of course I should describe FlatSpace 2's control scheme, sorry. You hold the mouse button down and move the mouse in the direction you want the ship to move in. But it's not the controls themselves I don't know how to do; it's the feeling of a mix of driving a car and flying an aircraft. It's really well made. YouTube link: FlatSpace2 on iPhone. I'm not developing an iPhone game, but the video shows the principle of the movement style.

    Edit 2: As there seems to be a slight interest, I'll post the version of the code I used to continue. It works well enough. Sometimes good enough is sufficient!

        using UnityEngine;
        using System.Collections;

        public class ShipMovement : MonoBehaviour
        {
            public float directionModifier;
            float shipRotationAngle;
            public float shipRotationSpeed = 0;
            public double thrustModifier;
            public double accelerationModifier;
            public double shipBaseAcceleration = 0;
            public Vector2 directionVector;
            public Vector2 accelerationVector = new Vector2(0, 0);
            public Vector2 frictionVector = new Vector2(0, 0);
            public int shipFriction = 0;
            public Vector2 shipSpeedVector;
            public Vector2 shipPositionVector;
            public Vector2 speedCap = new Vector2(0, 0);

            void Update()
            {
                directionModifier = -Input.GetAxis("Horizontal");
                shipRotationAngle += (shipRotationSpeed * directionModifier) * Time.deltaTime;
                thrustModifier = Input.GetAxis("Vertical");
                accelerationModifier = (shipBaseAcceleration * thrustModifier) * Time.deltaTime;
                directionVector = new Vector2(Mathf.Cos(shipRotationAngle), Mathf.Sin(shipRotationAngle));

                //accelerationVector = Vector2(directionVector.x * System.Convert.ToDouble(accelerationModifier), directionVector.y * System.Convert.ToDouble(accelerationModifier));
                accelerationVector.x = directionVector.x * (float)accelerationModifier;
                accelerationVector.y = directionVector.y * (float)accelerationModifier;

                // Set friction based on how "floaty" you want the controls
                shipSpeedVector.x *= 0.9f; // Use a variable here
                shipSpeedVector.y *= 0.9f; // <-- as well

                shipSpeedVector += accelerationVector;
                shipPositionVector += shipSpeedVector;
                gameObject.transform.position = new Vector3(shipPositionVector.x, 0, shipPositionVector.y);
            }
        }

  • Changing jobs and leaving a project without a leader (aka, me)

    - by AnonUntilAfterTheEvent
    I'm the lead on a project that has been underway for about a year and a half. Two of us have been working on it: one is the database guy, and I'm the javascript/UI guy. Which is to say, there is essentially no overlap in code knowledge.

    Here's the thing. Someone is about to offer me a sweet job with a nearly 30% bump in pay. Though I am perfectly happy with my current job and love the project, the new one would be better, and I can't imagine saying no. The big problem is that my project is supposed to go into production starting in a few weeks. I will consider the new job disqualified -- and the new guys bad people who would ruin my life -- if they won't cooperate and let me start after deployment. Since they seem like decent, ethical people, I don't expect that to be a problem.

    The current project will be brutalized by my absence. I take some comfort in the fact that I have emphatically requested an understudy for at least six months. That puts a little of the responsibility on the boss's head, but still, it's going to be a really bad thing. What do others of you do when you are critical to a project and it's time to move on? Do I owe any obligation to stick around even though something better shows up? I know my spouse would object if I found someone else. Does that apply to work?

    I do have an understudy now, though he's fresh out of college. He's not going to replace me anytime soon. It's a small shop and the boss is going to be crushed. I am traumatized in anticipation of telling him and feel guilty about the practical consequences. I'm looking for some solace and some strategy about how to deal with this transition. Thank you for listening.

    ========================= Subsequent notes =========================

    @ChaosPandion, Chance: No, I can't stay to finish the project. I will insist on a compromise where I finish the current sprint (about a month from now), but there is at least a half year, probably a year, of solid full-time work still to be done. I wouldn't expect the new employer to hold the job that long.

    Read the article

  • High CPU load for 1:30 minutes when mounting ext4-raid partition

    - by sirion
    I have a RAID 5 (software) with 5x2TB drives. I encrypted the raid with cryptsetup and put an ext4 partition on top. In the beginning, opening and mounting the raid took less than 10 seconds; for a few weeks now, mounting alone has taken 1:30 minutes, with the CPU staying around 93% the whole time. The output of "time sudo mount /dev/mapper/8000 /media/8000" is:

    real 1m31.952s
    user 0m0.008s
    sys  1m25.229s

    At the same time only one line is added to /var/log/syslog:

    kernel: [ 2240.921381] EXT4-fs (dm-1): mounted filesystem with ordered data mode. Opts: (null)

    My Ubuntu version is "12.04.1 LTS" and no updates are pending. I checked the partition with fsck, and it says all is ok. The "cryptsetup luksOpen" command only takes a few seconds. I also tried changing the raid bitmap (as was suggested in some forum), but it did not change the behaviour:

    sudo mdadm --grow /dev/md0 -b internal
    sudo mdadm --grow /dev/md0 -b none

    I had the idea that it might be the hardware being slow, but a read test with "sudo hdparm -t /dev/md0" spit out values between 62 and 159 MB/sec:

    Timing buffered disk reads: 382 MB in 3.00 seconds = 127.14 MB/sec
    Timing buffered disk reads: 482 MB in 3.02 seconds = 159.62 MB/sec
    Timing buffered disk reads: 190 MB in 3.03 seconds =  62.65 MB/sec
    Timing buffered disk reads: 474 MB in 3.02 seconds = 157.12 MB/sec

    I do think it is strange that the read rate jumps by more than 100% - could that mean something? The speed test on the mapped (decrypted) device, "sudo hdparm -t /dev/mapper/8000", shows similar behaviour, although it is of course much slower:

    Timing buffered disk reads:  56 MB in 3.02 seconds = 18.54 MB/sec
    Timing buffered disk reads: 122 MB in 3.09 seconds = 39.43 MB/sec
    Timing buffered disk reads: 134 MB in 3.02 seconds = 44.35 MB/sec

    The output of a verbose mount, "mount -vvv /dev/mapper/8000 /media/8000", does not help much:

    mount: fstab path: "/etc/fstab"
    mount: mtab path:  "/etc/mtab"
    mount: lock path:  "/etc/mtab~"
    mount: temp path:  "/etc/mtab.tmp"
    mount: UID:        0
    mount: eUID:       0
    mount: spec:  "/dev/mapper/8000"
    mount: node:  "/media/8000"
    mount: types: "(null)"
    mount: opts:  "(null)"
    mount: you didn't specify a filesystem type for /dev/mapper/8000
           I will try type ext4
    mount: mount(2) syscall: source: "/dev/mapper/8000", target: "/media/8000", filesystemtype: "ext4", mountflags: -1058209792, data: (null)

    Any idea where I could find additional information on why mounting takes so long, or what additional tests I could run?
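    One observation worth chasing: almost the entire 1:30 is sys (kernel) time rather than I/O wait, so profiling the mount call itself should show where the kernel spends it. A rough diagnostic sketch, assuming strace and perf are installed (on 12.04 perf comes from the linux-tools package; the device paths are the asker's own):

    # Summarize the syscalls mount makes and the kernel time spent in each
    sudo strace -c mount /dev/mapper/8000 /media/8000

    # Or sample kernel stacks for the duration of the mount, then inspect the report
    sudo perf record -g -- mount /dev/mapper/8000 /media/8000
    sudo perf report

    If the perf report shows the time inside ext4/jbd2 functions, that would point at the filesystem itself (e.g. journal recovery or block-group initialization) rather than at dm-crypt or the raid layer.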

    Read the article

  • Determining cause of random latency/loading issues

    - by Sherwin Flight
    I'm not sure exactly what details to post, because I'm not sure what is relevant. Prior to the end of September my websites all loaded quickly, in almost all cases; loading time wasn't usually more than a few seconds. Since the end of September, though, I've noticed a big increase in page loading times - in some cases pages were taking 30 seconds or more to load. I have a remote monitoring service watching some of the sites, and the image below shows the response times over the past month. The response times at the beginning of this graph are what they usually were before this issue occurred; you can see a significant increase from the beginning to the end of the graph. The thing is, the problem does not happen 100% of the time. If I click through the site, or even just keep refreshing the page, about 25% of the time the pages load quickly; the remaining 75% of the time they load slowly. Sometimes the pages take so long that they time out and don't load at all. I have contacted my hosting provider, and they said things at their end were fine. I don't believe the problem is my home internet provider, because all other websites load without a problem. The server is located in Texas, USA. This also raises another interesting point. My remote monitor checks my site from two locations: California, USA, and London, England. As you can see in the chart below, the response time is actually shorter when checked from London, which doesn't seem to make sense, since the server is physically closer to the California monitoring location; I would have expected London to show higher response times, since it is physically farther away. I should also point out that in some traceroute tests I've done, the first connection to the server seems to take the longest; after that, the rest of the page loads quickly. Below is a little chart showing the times for the first connection to the server. Sending the request to the server is very quick, and receiving the reply back seems pretty quick, but the WAIT time is really long: it connects, sends the request, and then waits close to 30 seconds before it starts receiving data back. So, what could be causing this problem, and what steps can I take to resolve it, or at least narrow it down? I am aware that there are things I can do to speed up page loading in general, like reducing the number of css/js files used on a page, compressing images, etc. That is not the source of this problem, though: nothing has changed on the site since before the problem started, and other sites on the same server are loading slowly as well. Any help or advice is much appreciated.
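    One way to quantify that long WAIT phase is to have curl break a request into timed stages; the stall the asker describes would show up as time to first byte. A small sketch, substituting one of the affected URLs for the placeholder example.com:

    # Print how long each phase of the request takes, in seconds
    curl -o /dev/null -s -w 'dns: %{time_namelookup}  connect: %{time_connect}  ttfb: %{time_starttransfer}  total: %{time_total}\n' http://example.com/

    Running this in a loop, ideally from a couple of different locations, would show whether the ~30-second stalls sit in DNS, the TCP connect, or (as the traceroutes suggest) the server's time to produce the first byte - which would point at the server or something in front of it rather than the network path.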

    Read the article

  • ArchBeat Link-o-Rama for 10-24-2012

    - by Bob Rhubart
    Play Oracle Vanquisher
    Here's a little respite from whatever it is you normally spend your time on. Oracle Vanquisher is an online diversion that makes a game of data center optimization. According to the description: "Armed with a cool Oracle vacuum pack suit and a strategic IT roadmap, you will thwart threats and optimize your data center to increase your company’s stock price and boost your company's position." Mainly you avoid electric shock and killer birds. The current high score belongs to someone identified as "TEN." My score? Never mind.
    Book: DevOps for Developers | The Java Source
    The subject of DevOps has come up in a couple of recent OTN ArchBeat Podcasts, so it's somewhat serendipitous that Tori Wieldt's recent blog post offers an overview of Java Champion Michael Hüttermann's new book, DevOps for Developers, now available from Apress.
    Bring Your Own Device (BYOD): Context is everything… | The ORACLE-BASE Blog
    BYOD is a factor in the evolution of IT, but in what context? "The real IT work in companies is still being done on PCs," says Oracle ACE Director Tim Hall. "Yes, you can use a cloud service on your phone, but look around the office and you will see those cloud services are actually being used by people on PCs."
    Oracle in the Cloud: Oracle E-Business Suite sizing | Tom Laszewski
    Cloud expert Tom Laszewski shares several technical resources that will be helpful for sizing Oracle E-Business Suite.
    Setting Up, Configuring, and Using an Oracle WebLogic Server Cluster
    Author and expert Yuli Vasiliev shows you how to take advantage of multiple Oracle WebLogic Server instances grouped into a cluster to maximize scalability and availability.
    Webcast: Reduce Costs with Oracle's Database Storage Management
    Watch this! Join Oracle experts Kevin Jernigan and Margaret Hamburger for an interactive webcast in which you'll learn how Oracle's Database Storage Management can reduce storage costs and management complexity while improving query performance to meet service-level agreements and compliance requirements. Event date: Tuesday, November 6, 2012. Event time: 10 a.m. PT / 1 p.m. ET.
    Thought for the Day
    "Most software today is very much like an Egyptian pyramid with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves." — Alan Kay
    Source: softwarequotes.com

    Read the article

< Previous Page | 376 377 378 379 380 381 382 383 384 385 386 387  | Next Page >