Search Results

Search found 16032 results on 642 pages for 'everyday problems'.


  • Dual booting 12.10 and Win 7 - boots directly to Win 7

    - by user110174
    And thank you kindly for your help! I'll preface this by saying that I realize this is a common problem, with lots of troubleshooting guides available online; however, after multiple attempts with different guides I've made zero progress, and I'm hoping someone can help me with my specific scenario.

    First, my story: Initially, I installed Ubuntu 12.10 with the "Something Else" option with no problems, using a 4 GB swap logical partition and a 26 GB primary root partition. Wanting to try out Mint 13, I booted into Windows from GRUB2, used the latest version of EasyBCD (v2.2) to restore the Windows 7 bootloader to the MBR, deleted the Ubuntu partitions, and reformatted them as NTFS. I then created 30 GB of free space for Mint. I installed Mint using the same partitioning described above for Ubuntu 12.10, using /dev/sda for the boot installation files, and everything seemed to go well until I rebooted my computer and it went straight to Windows; I could find no way to get into Mint. So I went into Windows, restored the Windows bootloader to the MBR with EasyBCD, deleted the partitions, and so on, as I figured I'd done enough messing around and would go with Ubuntu 12.10.

    Now the problem: I restarted my computer, booting from the same Ubuntu USB key I originally used. Briefly, the message error: "prefix" is not set flashed on screen, and instead of being greeted with the GUI menu of "try vs. install Ubuntu", there was a menu with minimal graphics (like a BIOS menu) where I could select install, run from USB, and so on. After selecting "Install Ubuntu", the familiar install wizard with a GUI came up, I partitioned my drive as described, with /dev/sda for the boot installation files, the install went well, I rebooted and... straight to Windows. This is where I'm at.

    Fixes I've tried: (1) This guide: How can I repair grub? (How to get Ubuntu back after installing Windows?), to ensure GRUB is on the MBR. I followed all the steps, but when I reboot I still go directly into Windows. (2) Installing 12.04 instead of 12.10: same issue. (3) Reinstalling Ubuntu, writing the boot files to their own partition, then using EasyBCD to add a boot option for Ubuntu using the Windows bootloader, making sure to point EasyBCD at the partition I created with the Ubuntu installer (instructions here: http://neosmart.net/wiki/display/EBCD/Ubuntu). When I reboot and select the Ubuntu option, it puts me in GRUB4DOS with a cursor waiting for input. I have no idea what to type there, so I would just type "reboot" to exit. And this is where I am now.

    Any clue as to why I can't boot into Ubuntu? My computer specs are: ASUS UX31A, Core i7, Win 7 64 Pro, 256 GB SSD, Intel HM76 chipset with integrated Intel HD 4000 graphics, 4 GB memory. I've tried to be as clear as possible, but I'd be happy to provide any info that would help anyone along. Thanks for your patience in reading this! Sincerely, -MN
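
    For reference, guides like the one linked above generally boil down to a chroot-based GRUB reinstall run from the "Try Ubuntu" live session on the USB key, roughly like the sketch below. Treat it as a sketch only: /dev/sda1 as the Ubuntu root partition is an assumption, so confirm the partition with sudo fdisk -l first, and note that a machine booted in UEFI mode needs the grub-efi variant of this procedure instead.

      sudo mount /dev/sda1 /mnt                # assumed Ubuntu root partition
      sudo mount --bind /dev  /mnt/dev
      sudo mount --bind /proc /mnt/proc
      sudo mount --bind /sys  /mnt/sys
      sudo chroot /mnt grub-install /dev/sda   # put GRUB on the disk's MBR
      sudo chroot /mnt update-grub             # regenerate grub.cfg (should detect Windows)
      sudo reboot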

    Read the article

  • Virtual host is not working in Ubuntu 14 VPS using XAMPP 1.8.3

    - by viral4ever
    I am using XAMPP as server in ubuntu 14.04 VPS of digitalocean. I tried to setup virtual hosts. But it is not working and I am getting 403 error of access denied. I changed files too. My files with changes are /opt/lampp/etc/httpd.conf # # This is the main Apache HTTP server configuration file. It contains the # configuration directives that give the server its instructions. # See <URL:http://httpd.apache.org/docs/trunk/> for detailed information. # In particular, see # <URL:http://httpd.apache.org/docs/trunk/mod/directives.html> # for a discussion of each configuration directive. # # Do NOT simply read the instructions in here without understanding # what they do. They're here only as hints or reminders. If you are unsure # consult the online docs. You have been warned. # # Configuration and logfile names: If the filenames you specify for many # of the server's control files begin with "/" (or "drive:/" for Win32), the # server will use that explicit path. If the filenames do *not* begin # with "/", the value of ServerRoot is prepended -- so 'log/access_log' # with ServerRoot set to '/www' will be interpreted by the # server as '/www/log/access_log', where as '/log/access_log' will be # interpreted as '/log/access_log'. # # ServerRoot: The top of the directory tree under which the server's # configuration, error, and log files are kept. # # Do not add a slash at the end of the directory path. If you point # ServerRoot at a non-local disk, be sure to specify a local disk on the # Mutex directive, if file-based mutexes are used. If you wish to share the # same ServerRoot for multiple httpd daemons, you will need to change at # least PidFile. # ServerRoot "/opt/lampp" # # Mutex: Allows you to set the mutex mechanism and mutex file directory # for individual mutexes, or change the global defaults # # Uncomment and change the directory if mutexes are file-based and the default # mutex file directory is not on a local disk or is not appropriate for some # other reason. # # Mutex default:logs # # Listen: Allows you to bind Apache to specific IP addresses and/or # ports, instead of the default. See also the <VirtualHost> # directive. # # Change this to Listen on specific IP addresses as shown below to # prevent Apache from glomming onto all bound IP addresses. # #Listen 12.34.56.78:80 Listen 80 # # Dynamic Shared Object (DSO) Support # # To be able to use the functionality of a module which was built as a DSO you # have to place corresponding `LoadModule' lines at this location so the # directives contained in it are actually available _before_ they are used. # Statically compiled modules (those listed by `httpd -l') do not need # to be loaded here. 
# # Example: # LoadModule foo_module modules/mod_foo.so # LoadModule authn_file_module modules/mod_authn_file.so LoadModule authn_dbm_module modules/mod_authn_dbm.so LoadModule authn_anon_module modules/mod_authn_anon.so LoadModule authn_dbd_module modules/mod_authn_dbd.so LoadModule authn_socache_module modules/mod_authn_socache.so LoadModule authn_core_module modules/mod_authn_core.so LoadModule authz_host_module modules/mod_authz_host.so LoadModule authz_groupfile_module modules/mod_authz_groupfile.so LoadModule authz_user_module modules/mod_authz_user.so LoadModule authz_dbm_module modules/mod_authz_dbm.so LoadModule authz_owner_module modules/mod_authz_owner.so LoadModule authz_dbd_module modules/mod_authz_dbd.so LoadModule authz_core_module modules/mod_authz_core.so LoadModule authnz_ldap_module modules/mod_authnz_ldap.so LoadModule access_compat_module modules/mod_access_compat.so LoadModule auth_basic_module modules/mod_auth_basic.so LoadModule auth_form_module modules/mod_auth_form.so LoadModule auth_digest_module modules/mod_auth_digest.so LoadModule allowmethods_module modules/mod_allowmethods.so LoadModule file_cache_module modules/mod_file_cache.so LoadModule cache_module modules/mod_cache.so LoadModule cache_disk_module modules/mod_cache_disk.so LoadModule socache_shmcb_module modules/mod_socache_shmcb.so LoadModule socache_dbm_module modules/mod_socache_dbm.so LoadModule socache_memcache_module modules/mod_socache_memcache.so LoadModule dbd_module modules/mod_dbd.so LoadModule bucketeer_module modules/mod_bucketeer.so LoadModule dumpio_module modules/mod_dumpio.so LoadModule echo_module modules/mod_echo.so LoadModule case_filter_module modules/mod_case_filter.so LoadModule case_filter_in_module modules/mod_case_filter_in.so LoadModule buffer_module modules/mod_buffer.so LoadModule ratelimit_module modules/mod_ratelimit.so LoadModule reqtimeout_module modules/mod_reqtimeout.so LoadModule ext_filter_module modules/mod_ext_filter.so LoadModule request_module modules/mod_request.so LoadModule include_module modules/mod_include.so LoadModule filter_module modules/mod_filter.so LoadModule substitute_module modules/mod_substitute.so LoadModule sed_module modules/mod_sed.so LoadModule charset_lite_module modules/mod_charset_lite.so LoadModule deflate_module modules/mod_deflate.so LoadModule mime_module modules/mod_mime.so LoadModule ldap_module modules/mod_ldap.so LoadModule log_config_module modules/mod_log_config.so LoadModule log_debug_module modules/mod_log_debug.so LoadModule logio_module modules/mod_logio.so LoadModule env_module modules/mod_env.so LoadModule mime_magic_module modules/mod_mime_magic.so LoadModule cern_meta_module modules/mod_cern_meta.so LoadModule expires_module modules/mod_expires.so LoadModule headers_module modules/mod_headers.so LoadModule usertrack_module modules/mod_usertrack.so LoadModule unique_id_module modules/mod_unique_id.so LoadModule setenvif_module modules/mod_setenvif.so LoadModule version_module modules/mod_version.so LoadModule remoteip_module modules/mod_remoteip.so LoadModule proxy_module modules/mod_proxy.so LoadModule proxy_connect_module modules/mod_proxy_connect.so LoadModule proxy_ftp_module modules/mod_proxy_ftp.so LoadModule proxy_http_module modules/mod_proxy_http.so LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so LoadModule proxy_scgi_module modules/mod_proxy_scgi.so LoadModule proxy_ajp_module modules/mod_proxy_ajp.so LoadModule proxy_balancer_module modules/mod_proxy_balancer.so LoadModule proxy_express_module 
modules/mod_proxy_express.so LoadModule session_module modules/mod_session.so LoadModule session_cookie_module modules/mod_session_cookie.so LoadModule session_dbd_module modules/mod_session_dbd.so LoadModule slotmem_shm_module modules/mod_slotmem_shm.so LoadModule ssl_module modules/mod_ssl.so LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so LoadModule lbmethod_bytraffic_module modules/mod_lbmethod_bytraffic.so LoadModule lbmethod_bybusyness_module modules/mod_lbmethod_bybusyness.so LoadModule lbmethod_heartbeat_module modules/mod_lbmethod_heartbeat.so LoadModule unixd_module modules/mod_unixd.so LoadModule dav_module modules/mod_dav.so LoadModule status_module modules/mod_status.so LoadModule autoindex_module modules/mod_autoindex.so LoadModule info_module modules/mod_info.so LoadModule suexec_module modules/mod_suexec.so LoadModule cgi_module modules/mod_cgi.so LoadModule cgid_module modules/mod_cgid.so LoadModule dav_fs_module modules/mod_dav_fs.so LoadModule vhost_alias_module modules/mod_vhost_alias.so LoadModule negotiation_module modules/mod_negotiation.so LoadModule dir_module modules/mod_dir.so LoadModule actions_module modules/mod_actions.so LoadModule speling_module modules/mod_speling.so LoadModule userdir_module modules/mod_userdir.so LoadModule alias_module modules/mod_alias.so LoadModule rewrite_module modules/mod_rewrite.so <IfDefine JUSTTOMAKEAPXSHAPPY> LoadModule php4_module modules/libphp4.so LoadModule php5_module modules/libphp5.so </IfDefine> <IfModule unixd_module> # # If you wish httpd to run as a different user or group, you must run # httpd as root initially and it will switch. # # User/Group: The name (or #number) of the user/group to run httpd as. # It is usually good practice to create a dedicated user and group for # running httpd, as with most system services. # User root Group www </IfModule> # 'Main' server configuration # # The directives in this section set up the values used by the 'main' # server, which responds to any requests that aren't handled by a # <VirtualHost> definition. These values also provide defaults for # any <VirtualHost> containers you may define later in the file. # # All of these directives may appear inside <VirtualHost> containers, # in which case these default settings will be overridden for the # virtual host being defined. # # # ServerAdmin: Your address, where problems with the server should be # e-mailed. This address appears on some server-generated pages, such # as error documents. e.g. [email protected] # ServerAdmin [email protected] # # ServerName gives the name and port that the server uses to identify itself. # This can often be determined automatically, but we recommend you specify # it explicitly to prevent problems during startup. # # If your host doesn't have a registered DNS name, enter its IP address here. # #ServerName www.example.com:@@Port@@ # XAMPP ServerName localhost # # Deny access to the entirety of your server's filesystem. You must # explicitly permit access to web content directories in other # <Directory> blocks below. # <Directory /> AllowOverride none Require all denied </Directory> # # Note that from this point forward you must specifically allow # particular features to be enabled - so if something's not working as # you might expect, make sure that you have specifically enabled it # below. # # # DocumentRoot: The directory out of which you will serve your # documents. 
By default, all requests are taken from this directory, but # symbolic links and aliases may be used to point to other locations. # DocumentRoot "/opt/lampp/htdocs" <Directory "/opt/lampp/htdocs"> # # Possible values for the Options directive are "None", "All", # or any combination of: # Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews # # Note that "MultiViews" must be named *explicitly* --- "Options All" # doesn't give it to you. # # The Options directive is both complicated and important. Please see # http://httpd.apache.org/docs/trunk/mod/core.html#options # for more information. # #Options Indexes FollowSymLinks # XAMPP Options Indexes FollowSymLinks ExecCGI Includes # # AllowOverride controls what directives may be placed in .htaccess files. # It can be "All", "None", or any combination of the keywords: # Options FileInfo AuthConfig Limit # #AllowOverride None # since XAMPP 1.4: AllowOverride All # # Controls who can get stuff from this server. # Require all granted </Directory> # # DirectoryIndex: sets the file that Apache will serve if a directory # is requested. # <IfModule dir_module> #DirectoryIndex index.html # XAMPP DirectoryIndex index.html index.html.var index.php index.php3 index.php4 </IfModule> # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <Files ".ht*"> Require all denied </Files> # # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog "logs/error_log" # # LogLevel: Control the number of messages logged to the error_log. # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. # LogLevel warn <IfModule log_config_module> # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common <IfModule logio_module> # You need to enable mod_logio.c to use %I and %O LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio </IfModule> # # The location and format of the access logfile (Common Logfile Format). # If you do not define any access logfiles within a <VirtualHost> # container, they will be logged here. Contrariwise, if you *do* # define per-<VirtualHost> access logfiles, transactions will be # logged therein and *not* in this file. # CustomLog "logs/access_log" common # # If you prefer a logfile with access, agent, and referer information # (Combined Logfile Format) you can use the following directive. # #CustomLog "logs/access_log" combined </IfModule> <IfModule alias_module> # # Redirect: Allows you to tell clients about documents that used to # exist in your server's namespace, but do not anymore. The client # will make a new request for the document at its new location. # Example: # Redirect permanent /foo http://www.example.com/bar # # Alias: Maps web paths into filesystem paths and is used to # access content that does not live under the DocumentRoot. # Example: # Alias /webpath /full/filesystem/path # # If you include a trailing / on /webpath then the server will # require it to be present in the URL. 
You will also likely # need to provide a <Directory> section to allow access to # the filesystem path. # # ScriptAlias: This controls which directories contain server scripts. # ScriptAliases are essentially the same as Aliases, except that # documents in the target directory are treated as applications and # run by the server when requested rather than as documents sent to the # client. The same rules about trailing "/" apply to ScriptAlias # directives as to Alias. # ScriptAlias /cgi-bin/ "/opt/lampp/cgi-bin/" </IfModule> <IfModule cgid_module> # # ScriptSock: On threaded servers, designate the path to the UNIX # socket used to communicate with the CGI daemon of mod_cgid. # #Scriptsock logs/cgisock </IfModule> # # "/opt/lampp/cgi-bin" should be changed to whatever your ScriptAliased # CGI directory exists, if you have that configured. # <Directory "/opt/lampp/cgi-bin"> AllowOverride None Options None Require all granted </Directory> <IfModule mime_module> # # TypesConfig points to the file containing the list of mappings from # filename extension to MIME-type. # TypesConfig etc/mime.types # # AddType allows you to add to or override the MIME configuration # file specified in TypesConfig for specific file types. # #AddType application/x-gzip .tgz # # AddEncoding allows you to have certain browsers uncompress # information on the fly. Note: Not all browsers support this. # #AddEncoding x-compress .Z #AddEncoding x-gzip .gz .tgz # # If the AddEncoding directives above are commented-out, then you # probably should define those extensions to indicate media types: # AddType application/x-compress .Z AddType application/x-gzip .gz .tgz # # AddHandler allows you to map certain file extensions to "handlers": # actions unrelated to filetype. These can be either built into the server # or added with the Action directive (see below) # # To use CGI scripts outside of ScriptAliased directories: # (You will also need to add "ExecCGI" to the "Options" directive.) # #AddHandler cgi-script .cgi # XAMPP, since LAMPP 0.9.8: AddHandler cgi-script .cgi .pl # For type maps (negotiated resources): #AddHandler type-map var # # Filters allow you to process content before it is sent to the client. # # To parse .shtml files for server-side includes (SSI): # (You will also need to add "Includes" to the "Options" directive.) # # XAMPP AddType text/html .shtml AddOutputFilter INCLUDES .shtml </IfModule> # # The mod_mime_magic module allows the server to use various hints from the # contents of the file itself to determine its type. The MIMEMagicFile # directive tells the module where the hint definitions are located. # #MIMEMagicFile etc/magic # # Customizable error responses come in three flavors: # 1) plain text 2) local redirects 3) external redirects # # Some examples: #ErrorDocument 500 "The server made a boo boo." #ErrorDocument 404 /missing.html #ErrorDocument 404 "/cgi-bin/missing_handler.pl" #ErrorDocument 402 http://www.example.com/subscription_info.html # # # MaxRanges: Maximum number of Ranges in a request before # returning the entire resource, or one of the special # values 'default', 'none' or 'unlimited'. # Default setting is to accept 200 Ranges. #MaxRanges unlimited # # EnableMMAP and EnableSendfile: On systems that support it, # memory-mapping or the sendfile syscall may be used to deliver # files. This usually improves server performance, but must # be turned off when serving from networked-mounted # filesystems or if support for these functions is otherwise # broken on your system. 
# Defaults: EnableMMAP On, EnableSendfile Off # EnableMMAP off EnableSendfile off # Supplemental configuration # # The configuration files in the etc/extra/ directory can be # included to add extra features or to modify the default configuration of # the server, or you may simply copy their contents here and change as # necessary. # Server-pool management (MPM specific) #Include etc/extra/httpd-mpm.conf # Multi-language error messages Include etc/extra/httpd-multilang-errordoc.conf # Fancy directory listings Include etc/extra/httpd-autoindex.conf # Language settings #Include etc/extra/httpd-languages.conf # User home directories #Include etc/extra/httpd-userdir.conf # Real-time info on requests and configuration #Include etc/extra/httpd-info.conf # Virtual hosts Include etc/extra/httpd-vhosts.conf # Local access to the Apache HTTP Server Manual #Include etc/extra/httpd-manual.conf # Distributed authoring and versioning (WebDAV) #Include etc/extra/httpd-dav.conf # Various default settings Include etc/extra/httpd-default.conf # Configure mod_proxy_html to understand HTML4/XHTML1 <IfModule proxy_html_module> Include etc/extra/proxy-html.conf </IfModule> # Secure (SSL/TLS) connections <IfModule ssl_module> # XAMPP <IfDefine SSL> Include etc/extra/httpd-ssl.conf </IfDefine> </IfModule> # # Note: The following must must be present to support # starting without SSL on platforms with no /dev/random equivalent # but a statically compiled-in mod_ssl. # <IfModule ssl_module> SSLRandomSeed startup builtin SSLRandomSeed connect builtin </IfModule> # XAMPP Include etc/extra/httpd-xampp.conf Include "/opt/lampp/apache2/conf/httpd.conf" I used command shown in this example. I used below lines to change and add group Add group "groupadd www" Add user to group "usermod -aG www root" Change htdocs group "chgrp -R www /opt/lampp/htdocs" Change sitedir group "chgrp -R www /opt/lampp/htdocs/mysite" Change htdocs chmod "chmod 2775 /opt/lampp/htdocs" Change sitedir chmod "chmod 2775 /opt/lampp/htdocs/mysite" And then I changed my vhosts.conf file # Virtual Hosts # # Required modules: mod_log_config # If you want to maintain multiple domains/hostnames on your # machine you can setup VirtualHost containers for them. Most configurations # use only name-based virtual hosts so the server doesn't need to worry about # IP addresses. This is indicated by the asterisks in the directives below. # # Please see the documentation at # <URL:http://httpd.apache.org/docs/2.4/vhosts/> # for further details before you try to setup virtual hosts. # # You may use the command line option '-S' to verify your virtual host # configuration. # # VirtualHost example: # Almost any Apache directive may go into a VirtualHost container. # The first VirtualHost section is used for all requests that do not # match a ServerName or ServerAlias in any <VirtualHost> block. 
    #
    <VirtualHost *:80>
        ServerAdmin [email protected]
        DocumentRoot "/opt/lampp/docs/dummy-host.example.com"
        ServerName dummy-host.example.com
        ServerAlias www.dummy-host.example.com
        ErrorLog "logs/dummy-host.example.com-error_log"
        CustomLog "logs/dummy-host.example.com-access_log" common
    </VirtualHost>

    <VirtualHost *:80>
        ServerAdmin [email protected]
        DocumentRoot "/opt/lampp/docs/dummy-host2.example.com"
        ServerName dummy-host2.example.com
        ErrorLog "logs/dummy-host2.example.com-error_log"
        CustomLog "logs/dummy-host2.example.com-access_log" common
    </VirtualHost>

    NameVirtualHost *

    <VirtualHost *>
        ServerAdmin [email protected]
        DocumentRoot "/opt/lampp/htdocs/mysite"
        ServerName mysite.com
        ServerAlias mysite.com
        ErrorLog "/opt/lampp/htdocs/mysite/errorlogs"
        CustomLog "/opt/lampp/htdocs/mysite/customlog" common
        <Directory "/opt/lampp/htdocs/mysite">
            Options Indexes FollowSymLinks Includes ExecCGI
            AllowOverride All
            Order Allow,Deny
            Allow from all
            Require all granted
        </Directory>
    </VirtualHost>

    Still, it is not working: I get a 403 error on my IP and my domain, although I can access phpMyAdmin. If anyone can help me, please help me.
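
    For comparison, a minimal Apache 2.4-style name-based virtual host might look like the sketch below. This is not the poster's configuration: the leftover dummy-host blocks and the NameVirtualHost line (ignored by Apache 2.4) are dropped, the old 2.2-style Order/Allow directives are removed in favour of Require all granted, and the log files are kept out of the DocumentRoot. The domain and paths are taken from the question; everything else is an assumption.

      <VirtualHost *:80>
          ServerName mysite.com
          ServerAlias www.mysite.com
          DocumentRoot "/opt/lampp/htdocs/mysite"
          ErrorLog "logs/mysite-error_log"
          CustomLog "logs/mysite-access_log" common
          <Directory "/opt/lampp/htdocs/mysite">
              Options Indexes FollowSymLinks Includes ExecCGI
              AllowOverride All
              Require all granted
          </Directory>
      </VirtualHost>

    With a setup like this, a 403 that persists usually comes from filesystem permissions, so it is also worth checking that the Apache user can traverse and read /opt/lampp/htdocs/mysite (for example with ls -ld on each path component).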

    Read the article

  • CCNet TFS Migration - Dealing with left over folders

    - by Michael Stephenson
    I'm currently in the process of migrating our many BizTalk projects from MKS source control to TFS. While we will be using TFS for work item tracking, source control, and so on, we will continue to use Cruise Control for continuous integration, although I'm updating it to CCNet 1.5 at the same time. I'll post a few things, as much as a reminder to myself as anything, about some of the problems we come across.

    Problem: After the first build of our code, the next time a build is triggered an error is encountered by the TFS source control block while refreshing the source code:

    System.IO.IOException: The directory is not empty.
       at System.IO.Directory.DeleteHelper(String fullPath, String userPath, Boolean recursive)
       at System.IO.Directory.Delete(String fullPath, String userPath, Boolean recursive)
       at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Vsts.deleteDirectory(String path)
       at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Vsts.GetSource(IIntegrationResult result)
       at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Build(IIntegrationResult result)
       at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request)

    Project: Bupa.BPI.Documents
    Date of build: 2011-01-28 14:54:21
    Running time: 00:00:05
    Integration Request: Build (ForceBuild) triggered from VMOPBZDEV11

    Solution: The problem seems to be with a folder called TestLocations, which is created by the build process and used along with the file adapter as a way to get messages into BizTalk. For some reason, when the source control block does a full refresh of the code it does not get rid of this folder, then complains that this is a problem and fails the build. Interestingly, there are other folders created by the build which are deleted fine. My assumption is that this is something to do with the file adapter polling the directory. Note, however, that we have not had this problem with other source control blocks in the past. To work around this I have added a prebuild task to the ccnet.config file to delete this folder before the source control block is executed. See below for an example:

    <prebuild>
      <exec>
        <executable>cmd.exe</executable>
        <buildArgs>/c "if exist "C:\<MyCode>\TestLocations" rd /s /q "C:\<MyCode>\TestLocations""</buildArgs>
      </exec>
    </prebuild>

    Read the article

  • How do I (tactfully) tell my project manager or lead developer that the project's codebase needs serious work?

    - by Adam Maras
    I just joined a (relatively) small development team that's been working on a project for several months, if not a year. As most developers do when joining a project, I spent my first couple of days reviewing the project's codebase. The project (a medium- to large-sized ASP.NET WebForms internal line-of-business application) is, for lack of a more descriptive term, a disaster. There are three immediately noticeable problems with the coding standards:

    1. The standard is very loose. It describes more of what not to do (don't use Hungarian notation, etc.) than what to do.
    2. The standard isn't always followed. There are inconsistencies in the code formatting everywhere.
    3. The standard doesn't follow Microsoft's style guidelines. In my opinion, there's no value in deviating from the guidelines that were set forth by the developer of the framework and the largest contributor to the language specification.

    As for point 3, perhaps it bothers me more because I've taken the time to get my MCPD with a focus on web applications (specifically, ASP.NET), and I'm also the only Microsoft Certified Professional on the team. Because of what I learned in all of my schooling, self-teaching, and on-the-job learning (including my preparation for the certification exams), I've also spotted several instances in the project's code where things are simply not done in the best way. I've only been on this team for a week, but I see so many issues with their codebase that I imagine I'll be spending more time fighting with what's already written to do things "their way" than I would if I were working on a project that, for example, followed more widely accepted coding standards, architecture patterns, and best practices.

    This brings me to my question: should I (and if so, how do I) propose to my project manager and team lead that the project needs a major renovation? I don't want to walk into their office waving my MCTS and MCPD certificates around, saying that their project's codebase is crap. But I also don't want to stay silent and have to write kludgey code atop their kludgey code, because I actually want to write quality software and I want the end product to be stable and easily maintainable.

    Read the article

  • Silverlight Cream for June 16, 2010 -- #884

    - by Dave Campbell
    In this Issue: Zoltan Arvai, Emiel Jongerius, Charles Petzold, Adam Kinney, Deepesh Mohnani, Timmy Kokke, and Damon Payne. Shoutouts: Andy Beaulieu reported his Coding4Fun: Shuffleboard Game for WP7 has been posted -- Big ol' Tutorial and 6 videos of WP7 goodness Karl Shifflett announced Three New WPF and Silverlight Designer Videos Posted Charles Petzold has a cool Flip-Number Clock in Silverlight posted... cool demo, and the source. From SilverlightCream.com: Data Driven Applications with MVVM Part II: Messaging, Unit Testing, and Live Data Sources Zoltan Arvai has part 2 of his Data-Driven Apps with MVVM up, and this one is also including Messaging, Unit Testing, and Live WCF Data... good tutorial and all the code. Silverlight DataContext Changed Event and Trigger Emiel Jongerius takes a hard swing at the lack of DataContextChanged... his solution involves two attached properties instead of one... check it out and see what you think! Orientation Strategies for Windows Phone 7 Charles Petzold is discussing WP7 Orientation... showing the problems you can get involved in, and how to work through them... and you might be surprised at how he does it :) ... pretty cool as usual, Charles! Debugging the TranslateZoomRotate WPF Behavior in Blend Adam Kinney talks through a bug reported about the WPF TranslateZoomRotate Behavior. Again, it's WPF, but it's in Blend, and ya never know when the solution might apply. I want my app to look like the Zune client Deepesh Mohnani demonstrates using the Cosmopolitan theme to get his app to have the same look as the Zune client. MVVM Project and Item Templates Timmy Kokke is continuing with his cool SilverAmp media player, using it to expand upon the new Blend and Silverlight 4 features. This episode touches very lightly on cranking up a new MVVM project in Blend. Great Features for MVVM Friendly Objects Part 0: Favor Composition Over Inheritance Damon Payne has the first part up of a series he's working on with 'MVVM Friendly' features... he's building out a lot of the infrastructure in this post for the ones that follow... all good stuff. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Project Euler 18: (Iron)Python

    - by Ben Griswold
    In my attempt to learn (Iron)Python out in the open, here’s my solution for Project Euler Problem 18.  As always, any feedback is welcome. # Euler 18 # http://projecteuler.net/index.php?section=problems&id=18 # By starting at the top of the triangle below and moving # to adjacent numbers on the row below, the maximum total # from top to bottom is 23. # # 3 # 7 4 # 2 4 6 # 8 5 9 3 # # That is, 3 + 7 + 4 + 9 = 23. # Find the maximum total from top to bottom of the triangle below: # 75 # 95 64 # 17 47 82 # 18 35 87 10 # 20 04 82 47 65 # 19 01 23 75 03 34 # 88 02 77 73 07 63 67 # 99 65 04 28 06 16 70 92 # 41 41 26 56 83 40 80 70 33 # 41 48 72 33 47 32 37 16 94 29 # 53 71 44 65 25 43 91 52 97 51 14 # 70 11 33 28 77 73 17 78 39 68 17 57 # 91 71 52 38 17 14 91 43 58 50 27 29 48 # 63 66 04 68 89 53 67 30 73 16 69 87 40 31 # 04 62 98 27 23 09 70 98 73 93 38 53 60 04 23 # NOTE: As there are only 16384 routes, it is possible to solve # this problem by trying every route. However, Problem 67, is the # same challenge with a triangle containing one-hundred rows; it # cannot be solved by brute force, and requires a clever method! ;o) import time start = time.time() triangle = [ [75], [95, 64], [17, 47, 82], [18, 35, 87, 10], [20, 04, 82, 47, 65], [19, 01, 23, 75, 03, 34], [88, 02, 77, 73, 07, 63, 67], [99, 65, 04, 28, 06, 16, 70, 92], [41, 41, 26, 56, 83, 40, 80, 70, 33], [41, 48, 72, 33, 47, 32, 37, 16, 94, 29], [53, 71, 44, 65, 25, 43, 91, 52, 97, 51, 14], [70, 11, 33, 28, 77, 73, 17, 78, 39, 68, 17, 57], [91, 71, 52, 38, 17, 14, 91, 43, 58, 50, 27, 29, 48], [63, 66, 04, 68, 89, 53, 67, 30, 73, 16, 69, 87, 40, 31], [04, 62, 98, 27, 23, 9, 70, 98, 73, 93, 38, 53, 60, 04, 23]] # Loop through each row of the triangle starting at the base. for a in range(len(triangle) - 1, -1, -1): for b in range(0, a): # Get the maximum value for adjacent cells in current row. # Update the cell which would be one step prior in the path # with the new total. For example, compare the first two # elements in row 15. Add the max of 04 and 62 to the first # position of row 14.This provides the max total from row 14 # to 15 starting at the first position. Continue to work up # the triangle until the maximum total emerges at the # triangle's apex. triangle [a-1][b] += max(triangle [a][b], triangle [a][b+1]) print triangle [0][0] print "Elapsed Time:", (time.time() - start) * 1000, "millisecs" a=raw_input('Press return to continue')
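
    As a side note, the leading-zero literals in the triangle (04, 06, and so on) are octal in Python 2/IronPython but are rejected outright by Python 3. A minimal Python 3 sketch of the same bottom-up idea, checked against the 4-row example triangle from the problem statement rather than the full 15-row one:

      # Same bottom-up collapse on the small example triangle; the answer is 23.
      triangle = [
          [3],
          [7, 4],
          [2, 4, 6],
          [8, 5, 9, 3],
      ]
      for a in range(len(triangle) - 1, 0, -1):
          for b in range(a):
              triangle[a - 1][b] += max(triangle[a][b], triangle[a][b + 1])
      print(triangle[0][0])  # 23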

    Read the article

  • ScrollyFox Provides Automated Page Scrolling in Firefox

    - by Asian Angel
    Do you read a large amount of content on the web each day but get tired of manually scrolling through everything? Now you can set up relaxed-pace auto-scrolling in Firefox with the ScrollyFox extension. Note: You may occasionally encounter a website where the extension will not work; this may be due to the particular website's coding.

    Using ScrollyFox: Once you have the extension installed, you may want to have a quick look at the preferences. The default scroll speed is set at "50" and the reverse scrolling setting is enabled. You can easily adjust the speed setting to suit your needs. Note: For our examples we left the reverse scrolling setting enabled. By default the extension is disabled at first, and the status bar button will have a faded coloration. You can see what the button looks like once activated; notice the small arrow-type buttons on the right side. In our first example you can see the webpage auto-scrolling in a downward direction. Having reached the bottom, it automatically started scrolling back towards the top. Visiting the How-To Geek website, you can see that the extension was already working as the page finished loading. Going up!

    Conclusion: While this extension may not be for everyone, it can be useful for those who have heavy reading and/or very long articles to read.

    Links: Download the ScrollyFox extension (Mozilla Add-ons)

    Read the article

  • Camera rotation - First Person Camera using GLM

    - by tempvar
    I've just switched from deprecated opengl functions to using shaders and GLM math library and i'm having a few problems setting up my camera rotations (first person camera). I'll show what i've got setup so far. I'm setting up my ViewMatrix using the glm::lookAt function which takes an eye position, target and up vector // arbitrary pos and target values pos = glm::vec3(0.0f, 0.0f, 10.0f); target = glm::vec3(0.0f, 0.0f, 0.0f); up = glm::vec3(0.0f, 1.0f, 0.0f); m_view = glm::lookAt(pos, target, up); i'm using glm::perspective for my projection and the model matrix is just identity m_projection = glm::perspective(m_fov, m_aspectRatio, m_near, m_far); model = glm::mat4(1.0); I send the MVP matrix to my shader to multiply the vertex position glm::mat4 MVP = camera->getProjection() * camera->getView() * model; // in shader gl_Position = MVP * vec4(vertexPos, 1.0); My camera class has standard rotate and translate functions which call glm::rotate and glm::translate respectively void camera::rotate(float amount, glm::vec3 axis) { m_view = glm::rotate(m_view, amount, axis); } void camera::translate(glm::vec3 dir) { m_view = glm::translate(m_view, dir); } and i usually just use the mouse delta position as the amount for rotation Now normally in my previous opengl applications i'd just setup the yaw and pitch angles and have a sin and cos to change the direction vector using (gluLookAt) but i'd like to be able to do this using GLM and matrices. So at the moment i have my camera set 10 units away from the origin facing that direction. I can see my geometry fine, it renders perfectly. When i use my rotation function... camera->rotate(mouseDeltaX, glm::vec3(0, 1, 0)); What i want is for me to look to the right and left (like i would with manipulating the lookAt vector with gluLookAt) but what's happening is It just rotates the model i'm looking at around the origin, like im just doing a full circle around it. Because i've translated my view matrix, shouldn't i need to translate it to the centre, do the rotation then translate back away for it to be rotating around the origin? Also, i've tried using the rotate function around the x axis to get pitch working, but as soon as i rotate the model about 90 degrees, it starts to roll instead of pitch (gimbal lock?). Thanks for your help guys, and if i've not explained it well, basically i'm trying to get a first person camera working with matrix multiplication and rotating my view matrix is just rotating the model around the origin.
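
    For what it's worth, the usual pattern is to stop mutating m_view with glm::rotate/glm::translate and instead store the camera state (position, yaw, pitch) and rebuild the view matrix from it every frame with glm::lookAt. A rough sketch, not drop-in code for the poster's camera class (the names are made up):

      #include <cmath>
      #include <glm/glm.hpp>
      #include <glm/gtc/matrix_transform.hpp>

      struct FpsCamera {
          glm::vec3 pos{0.0f, 0.0f, 10.0f};
          float yaw   = -90.0f;  // degrees; -90 faces down -Z toward the origin
          float pitch = 0.0f;

          void onMouseDelta(float dx, float dy) {
              yaw   += dx;
              pitch += dy;
              // Clamp pitch so the view never flips over the poles (avoids the
              // roll-like behaviour seen when rotating past 90 degrees).
              pitch = glm::clamp(pitch, -89.0f, 89.0f);
          }

          glm::mat4 view() const {
              glm::vec3 dir(
                  std::cos(glm::radians(yaw)) * std::cos(glm::radians(pitch)),
                  std::sin(glm::radians(pitch)),
                  std::sin(glm::radians(yaw)) * std::cos(glm::radians(pitch)));
              return glm::lookAt(pos, pos + glm::normalize(dir), glm::vec3(0, 1, 0));
          }
      };

    Because the view matrix is rebuilt from the angles each frame, the rotation is always about the camera's own position rather than the world origin, which is the behaviour the question is after.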

    Read the article

  • Loading class instance from XML with Texture2D

    - by Thegluestickman
    I'm having trouble with XML and XNA. I want to be able to load weapon settings through XML to make my weapons easier to make and to have less code in the actual project file. So I started out making a basic XML document, something to just assign variables with. But no matter what I changed it gave me a new error every time. The code below gives me a "XML element 'Tag' not found", I added and it started to say the variables weren't found. What I wanted to do in the XML file as well, was load a texture for the file too. So I created a static class to hold my texture values, then in the Texture tag of my XML document I would set it to that instance too. I think that's were the problems are occuring because that's where the "XML element 'Tag' not found" error is pointing me too. My XML document: <XnaContent> <Asset Type="ConversationEngine.Weapon"> <weaponStrength>0</weaponStrength> <damageModifiers>0</damageModifiers> <speed>0</speed> <magicDefense>0</magicDefense> <description>0</description> <identifier>0</identifier> <weaponTexture>LoadWeaponTextures.ironSword</weaponTexture> </Asset> </XnaContent> My Class to load the weapon XML: public class Weapon { public int weaponStrength; public int damageModifiers; public int speed; public int magicDefense; public string description; public string identifier; public Texture2D weaponTexture; } public static class LoadWeaponXML { static Weapon Weapons; public static Weapon WeaponLoad(ContentManager content, int id) { Weapons = content.Load<Weapon>(@"Weapons/" + id); return Weapons; } } public static class LoadWeaponTextures { public static Texture2D ironSword; public static void TextureLoad(ContentManager content) { ironSword = content.Load<Texture2D>("Sword"); } } I'm not entirely sure if you can load textures through XML, but any help would be greatly appreciated.
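
    One widely used workaround, sketched below with assumed field names rather than taken from the question, is to keep the Texture2D out of the XML entirely: serialize the texture's content asset name as a string, ignore the Texture2D field during deserialization, and resolve it with ContentManager.Load after the Weapon has been read.

      using Microsoft.Xna.Framework.Content;
      using Microsoft.Xna.Framework.Graphics;

      public class Weapon
      {
          public int weaponStrength;
          public int damageModifiers;
          public int speed;
          public int magicDefense;
          public string description;
          public string identifier;

          // Stored in the XML, e.g. <textureAsset>Sword</textureAsset>
          public string textureAsset;

          // Not read from XML; filled in after content.Load<Weapon>(...)
          [ContentSerializerIgnore]
          public Texture2D weaponTexture;
      }

      public static class LoadWeaponXML
      {
          public static Weapon WeaponLoad(ContentManager content, int id)
          {
              Weapon weapon = content.Load<Weapon>(@"Weapons/" + id);
              weapon.weaponTexture = content.Load<Texture2D>(weapon.textureAsset);
              return weapon;
          }
      }

    The <weaponTexture> element in the XML then goes away, which also removes the attempt to serialize a static class member (LoadWeaponTextures.ironSword) that the content pipeline cannot resolve.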

    Read the article

  • Simple tips to design a Customer Journey Map

    - by Isabel F. Peñuelas
    “A model can abstract to a level that is comprehensible to humans, without getting lost in details.” -The Unified Modeling Language Reference Manual. Inception using Post-it, StoryBoards, Lego or Mindmaping Techniques The first step in a Customer Experience project is to describe customer interactions creating a customer journey map. Modeling is never easy, so to succeed on this effort, it is very convenient that your CX´s team have some “abstract thinking” skills. Besides is very helpful to consult a Business Service Design offered by an Interactive Agency to lead your inception process. Initially, you may start by a free discussion using post-it cards; storyboards; even lego or any other brainstorming technique you like. This will help you to get your mind into the path followed by the customer to purchase your product or to consume any business service you actually offer to your customers, or plan to offer in the near future. (from www.servicedesigntools.org) Colorful Mind Maps are very useful to document and share meeting ideas. Some Mind Maps software providers as ThinkBuzzan provide trial versions, and you will find more mindmapping options on this post by Mashable. Finally to produce a quick one, I do recommend Wise, an entirely online mindmaping service. On my view the best results in terms of communication will always come for an artistic hand-made drawing. Customer Experience Mind Map Example Making your first Customer Journey Map To add some more formalization to your thoughts, there is a wide offering for designing Customer Journey Maps. A Customer Map can be represented as an oriented graph in which another follows each step. The one below is the most simple Customer Journey you can draw. Nothing more than a couple of pictures, numbers and lines to design the customer steps sequence in the purchase process. Very simple Customer Journey for Social Mobile Shopping There are a lot of Customer Journey templates much more sophisticated available  in the Web using a variety of styles, as per example this one with a focus on underlining emotional experience, or this other worksheet template. Representing different interaction devices on the vertical axis, and touchpoints / requirements and existing gaps horizontally  is today´s most common format for Customer Journeys. From Customer Journey Maps to CX Technology Adoption Plans Once you have your map ready, you can start to identify the IT infrastructure requirements for your CXProject. By analyzing customer problems and improvement opportunities with maps, you will then identify the technology gaps and the new investment requirements in your IT infrastructure. Deeping step by step from the more abstract to the more concrete is the best guarantee to take the right IT investment decisions.  ¡Remember to keep your initial customer journey safe on your pocket in every one of your CX´s project meetings- that´s you map to success!

    Read the article

  • Package system broken - E: Sub-process /usr/bin/dpkg returned an error code (1)

    - by delha
    After installing some packages and libraries I have an error on Package Manager, I can't run any update because it says: "The package system is broken If you are using third party repositories then disable them, since they are a common source of problems. Now run the following command in a terminal: apt-get install -f " I've tried to do what it says and it returns me: jara@jara-Aspire-5738:~$ sudo apt-get install -f Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages were automatically installed and are no longer required: libcaca-dev libopencv2.3-bin nite-dev python-bluez ps-engine libslang2-dev python-sphinx ros-electric-geometry-tutorials ros-electric-geometry-visualization python-matplotlib libzzip-dev ros-electric-orocos-kinematics-dynamics ros-electric-physics-ode libbluetooth-dev libaudiofile-dev libassimp2 libnetpbm10-dev ros-electric-laser-pipeline python-epydoc ros-electric-geometry-experimental libasound2-dev evtest python-matplotlib-data libyaml-dev ros-electric-bullet ros-electric-executive-smach ros-electric-documentation libgl2ps0 libncurses5-dev ros-electric-robot-model texlive-fonts-recommended python-lxml libwxgtk2.8-dev daemontools libxxf86vm-dev libqhull-dev libavahi-client-dev ros-electric-geometry libgl2ps-dev libcurl4-openssl-dev assimp-dev libusb-1.0-0-dev libopencv2.3 ros-electric-diagnostics-monitors libsdl1.2-dev libjs-underscore libsdl-image1.2 tipa libusb-dev libtinfo-dev python-tz python-sip libfltk1.1 libesd0 libfreeimage-dev ros-electric-visualization x11proto-xf86vidmode-dev python-docutils libvtk5.6 ros-electric-assimp x11proto-scrnsaver-dev libnetcdf-dev libidn11-dev libeigen3-dev joystick libhdf5-serial-1.8.4 ros-electric-joystick-drivers texlive-fonts-recommended-doc esound-common libesd0-dev tcl8.5-dev ros-electric-multimaster-experimental ros-electric-rx libaudio-dev ros-electric-ros-tutorials libwxbase2.8-dev ros-electric-visualization-common python-sip-dev ros-electric-visualization-tutorials libfltk1.1-dev libpulse-dev libnetpbm10 python-markupsafe openni-dev tk8.5-dev wx2.8-headers freeglut3-dev libavahi-common-dev python-roman python-jinja2 ros-electric-robot-model-visualization libxss-dev libqhull5 libaa1-dev ros-electric-eigen freeglut3 ros-electric-executive-smach-visualization ros-electric-common-tutorials ros-electric-robot-model-tutorials libnetcdf6 libjs-sphinxdoc python-pyparsing libaudiofile0 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: libcv-dev The following NEW packages will be installed libcv-dev 0 upgraded, 1 newly installed, 0 to remove and 4 not upgraded. 2 not fully installed or removed. Need to get 0 B/3,114 kB of archives. After this operation, 11.1 MB of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 261801 files and directories currently installed.) Unpacking libcv-dev (from .../libcv-dev_2.1.0-7build1_amd64.deb) ... 
    dpkg: error processing /var/cache/apt/archives/libcv-dev_2.1.0-7build1_amd64.deb (--unpack):
     trying to overwrite '/usr/bin/opencv_haartraining', which is also in package libopencv2.3-bin 2.3.1+svn6514+branch23-12~oneiric
    dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
    Errors were encountered while processing:
     /var/cache/apt/archives/libcv-dev_2.1.0-7build1_amd64.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    I've tried everything people recommend on the internet, like:
    sudo apt-get clean
    sudo apt-get autoremove
    sudo apt-get update
    sudo apt-get upgrade
    sudo apt-get -f install
    I've also tried to install the Synaptic package manager, but it doesn't let me install anything. As you can see, nothing works, so I'm desperate! I'm using Ubuntu 11.10, 64 bits. Thanks!!
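
    The error itself is a file conflict: libcv-dev wants to install /usr/bin/opencv_haartraining, which libopencv2.3-bin already owns. Two common ways out, sketched here rather than tailored to this machine, are to let dpkg overwrite the conflicting file or to remove the package that owns it first:

      # Option 1: force the overwrite for this one package, then let apt finish.
      sudo dpkg -i --force-overwrite /var/cache/apt/archives/libcv-dev_2.1.0-7build1_amd64.deb
      sudo apt-get install -f

      # Option 2: remove the package that owns the conflicting file instead.
      sudo apt-get remove libopencv2.3-bin
      sudo apt-get install -f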

    Read the article

  • How do I keep my Huawei E3131 3G modem from unmounting?

    - by John Perez
    I need help getting my Huawei E3131 modem to (consistently) work. I am currently running Ubuntu 12.04.2 and 3G dongles have worked many times before. However, my newly acquired Huawei E3131 is causing problems. When plugged in, Ubuntu detects the device as a CD-ROM and a modem. I can browse the dongle's contents using Nautilus and Network Manager is able to configure and work the dongle out-of-the-box. I can even get to surf. However, within minutes, the connection drops and the CD-ROM is unmounted. I wait about 15 seconds, then the CD-ROM mounts again and Network Manager is able to connect again with short-lived surfing. Rinse, lather and repeat. It's strange that the device mounts as a CD-ROM, but works as a modem, too, suggesting that mode switching happens somewhere. It's not a signal coverage problem, either because I tested the same SIM card using 3 other different dongles (2 ZTEs and another Huawei) and it is only this E3131 that has this problem. If pertinent, the other dongles weren't being detected as CD-ROMs. Output of lsusb Bus 001 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 001 Device 003: ID 0ac8:c342 Z-Star Microelectronics Corp. Bus 001 Device 010: ID 12d1:1506 Huawei Technologies Co., Ltd. E398 LTE/UMTS/GSM Modem/Networkcard Bus 001 Device 004: ID 0a5c:219b Broadcom Corp. Bluetooth 2.1 Device I tried installing the drivers found here, but no fish. Furthermore, my device is an E3131, but lsusb for some reason detects it as an E398. I'm not sure how that plays a role for this problem, though. I hope someone out there can help me. I'm out of ideas already. Thank you very much!
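
    A hedged sketch of the kind of thing often tried for these Huawei sticks (not a verified fix for this particular firmware): install usb-modeswitch and send the Huawei switching message so the stick settles in modem mode instead of re-presenting its virtual CD-ROM. The vendor/product IDs below are taken from the lsusb output above and may need to be the storage-mode IDs on some firmwares.

      sudo apt-get install usb-modeswitch usb-modeswitch-data
      # -v/-p select the device, -J sends the Huawei mode-switch message
      sudo usb_modeswitch -v 12d1 -p 1506 -J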

    Read the article

  • Unable to update/ install any files [closed]

    - by Surya
    Possible Duplicate: “Problem with MergeList” error when trying to do an update Just now I installed ubuntu 12.04 on my Lenovo G570 laptop. First I got an error at the time of installation (don't know about it) and I restarted the system and next time, it went well. So, after installing problems started.. There was a error with "Language recognition" and I tried to fix it but didn't work. I tried to install powerTop to check the status of power management. at terminal: sudo apt-get install powertop This is the error I got surya@surya-Lenovo-G570:~$ sudo apt-get powertop install [sudo] password for surya: E: Invalid operation powertop surya@surya-Lenovo-G570:~$ sudo apt-get install powertop Reading package lists... Error! E: Encountered a section with no Package: header E: Problem with MergeList /var/lib/apt/lists/extras.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages E: The package lists or status file could not be parsed or opened. surya@surya-Lenovo-G570:~$ ^C surya@surya-Lenovo-G570:~$ ^C surya@surya-Lenovo-G570:~$ ^C surya@surya-Lenovo-G570:~$ I downloaded Google Chrome .deb one and tried to install but its not working. Software center is opened and its not loading. There was a notification on the status bar which says: An error occurred please run the package manager from the right-click menu ... .... ... E: Encountered a section with no Package: header E: Problem with MergeList /var/lib/apt/lists/extras.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages "Copy & Paste" from terminal is not really working... When I press Ctrl + C; its showing ^C on terminal but its not working.. The most important error: I am unable to see a "chip" icon on the status bar so as to install proprietary drivers for my ATI drivers... The interesting part is, powertop worked will on live cd and it even detected my ATI card. Update When I opened "Software Up to Date", this showed a error: Could not initialize the package information An unresolvable problem occurred while initializing the package information. Please report this bug against the 'update-manager' package and include the following error message: 'E:Encountered a section with no Package: header, E:Problem with MergeList /var/lib/apt/lists/extras.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages, E:The package lists or status file could not be parsed or opened.' : My laptop details Lenovo G570; Intel 2nd Gen i5 processor 4GB DDR3 RAM Intel in-build graphics + AMD Radeon HD 6370M 1GB graphics. I need help ASAP.
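
    The MergeList error usually means a corrupted downloaded package list rather than anything specific to this laptop; the widely recommended recovery (also the answer in the duplicate linked above) is to delete the cached lists and let apt rebuild them, roughly:

      sudo rm -rf /var/lib/apt/lists/*
      sudo apt-get update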

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead. The Long Road To Stubs This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled: 4631488 lib/Makefile is too patient: .WAITs should be reduced This CR encapsulates a number of cronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everying in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifing. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, its hard to think calmly, and there is no time for deep fixes. 
As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incremantally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot. In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that lead us to a game changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publically available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust. 
We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any object or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.

In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

    # DATA(i386)            __iob   0x3c0
    # DATA(amd64,sparcv9)   __iob   0xa00
    # DATA(sparc)           __iob   0x140
    __iob;

A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool.

Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script is used after both objects have been built to compare the real and stub objects, using data from elfdump, and validate that they present the same linking interface.

By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
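To give a feel for what that prototype produced, here is a rough sketch of the kind of C glue such a script might emit for the __iob example above plus one hypothetical function. This is my own illustration, not the script's actual output; the function name is made up.

    /* stub_glue.c -- hypothetical glue code for a generated stub (sketch only).
     * A function symbol just needs to exist with the right type, so an empty
     * body is enough.  A data symbol must match the size (and bss placement)
     * of the real object so that copy relocations keep working.
     */
    char __iob[0x3c0];               /* data: size from the i386 DATA comment */

    void _some_public_func(void) {}  /* hypothetical exported function */

The script would then link the stub shared object from this glue with the usual ld -G options and the same mapfile, and the companion script would diff elfdump output from the stub and the real object to confirm that the two linking interfaces match.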
Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues: The use of stylized comments was fine for a prototype, but not close to professional enough for a shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that it wasn't a real object. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

At that point, we needed to apply this prototype to building Solaris. As you might imagine, modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years.

Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had stub objects in mind as I moved forward.

The new mapfile syntax, which we call version 2, was integrated into Nevada build snv_135 in February 2010:

    6916788 ld version 2 mapfile syntax
    PSARC/2009/688 Human readable and extensible ld mapfile syntax

In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

    6916796 OSnet mapfiles should use version 2 link-editor syntax

That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this: We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

    % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
        -ztext -zdefs -Bdirect ...
    real        0.019708910
    user        0.010101680
    sys         0.008528431

In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished.

Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects however was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging.

Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target is added to the Makefiles called stubinstall, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them.

After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do.

I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefile, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down: Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel. It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timing between the stub and non-stub cases was just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.

Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with

    6993877 ld should produce stub objects
    PSARC/2010/397 ELF Stub Objects

followed by the work to convert the ON consolidation in snv_161 (February 2011) with

    7009826 OSnet should use stub objects
    4631488 lib/Makefile is too patient: .WAITs should be reduced

This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions were discovered and addressed over the next few weeks, and things have been quiet since then.

Conclusions and Looking Forward

Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • Grub2 -- Dualboot Ubuntu LTS 12.04 and Windows 7 -- Detects two Windows 7 (loader) entries

    - by DarkIron112
this is the first question I have ever asked the Ubuntu Community. :D I'm fairly new to Ubuntu, but I understand the basics and know how to navigate the Terminal. I also know how to ask for/research my problems before asking for help. I have scoured the internet high and low and learned much of how Grub2 works. But nothing has helped me to solve my problem. My problem is this: I have a computer that has three hard drives. It previously had Windows XP, but I upgraded to Windows 7. I also installed Ubuntu 12.04 LTS (Precise Pangolin). During my installation of Windows 7, there was a failure and I had to restart the installation. Afterwards, I installed Ubuntu. After some trouble removing all traces of the XP OS (Ubuntu auto-detected it, but not Windows 7) I got the two OSes working flawlessly. Or, almost. When booting up, Grub2 used to display Ubuntu, Ubuntu Recovery Mode, Other Versions of Linux, memtest, followed by "Windows 7 (loader) on /dev/sda1" and "Windows 7 (loader) on /dev/sdb1". I eventually removed Recovery Mode, Other Versions, and Memtest. Now, when I run:

    sudo update-grub

I get this print-out:

    Generating grub.cfg ...
    Found linux image: /boot/vmlinuz-3.2.0-26-generic
    Found initrd image: /boot/initrd.img-3.2.0-26-generic
    Found Windows 7 (loader) on /dev/sda1
    Found Windows 7 (loader) on /dev/sdb1

I would like to remove "Windows 7 (loader) on /dev/sda1", as it is a broken entry that shouldn't exist, and must have been installed during my first Windows 7 attempt. I cannot find a Windows 7 entry in /etc/grub.d... And I don't know where to look. Here is a layout of my hard drives:

    /dev/sda1/ (1.82 TiB), NTFS ("Media")
    /dev/sdb1/ (100 Mib), NTFS ("System Reserved")
    /dev/sdb2/ (149 GiB), NTFS ("Windows 7")
    /dev/sdb3/ (149 GiB), Extended (" ")
    /dev/sdb4/ (145 GiB), ext4 (" ")
    /dev/sdb5/ (4 GiB), linux-swap (" ")
    /dev/sdc1/ (488.28 GiB), NTFS ("Downloads")
    /dev/sdc2/ (488.28 GiB), NTFS ("AltMedia")
    /dev/sdc3/ (886.45 GiB), NTFS ("Personal")
    unallocated (2.09 MiB), unallocated

What I think has happened: Windows 7 installed first and badly. I installed it again. The first time, there was Windows XP to guide where the bootloader went, so it was put on /dev/sdb1. But the second time no such guide existed, so the machine put another bootloader on /dev/sda1. sda1, by the way, is the only partition on a 2TB drive. No boot record partition appears to exist according to gedit. I'm not sure where Grub2 is getting this information from. But, there it is. Is there anything somebody can do to help me? Or, is there any more information I should add? Thank you, community!

    Read the article

  • Sqlite & Entity Framework 4

    - by Dane Morgridge
I have been working on a few client app projects in my spare time that need to persist small amounts of data and have been looking for an easy to use embedded database. I really like db4o but I'm not wanting to open source this particular project so it was not an option. Then I remembered that there was an ADO.NET provider for sqlite. Being a fan of sqlite in general, I downloaded it and gave it an install. The installer added tooling support for both Visual Studio 2008 & 2010, which is nice because I am working almost exclusively in 2010 at the moment. I noticed that the provider also had support for Entity Framework, but not specifically v4. I created a database using the tools that get installed with Visual Studio and all seemed to work fine. I went on to create an Entity Framework context and selected the sqlite database and to my surprise it worked without any problems. The model showed up just like it would for any database and so I started to write a little code to test and then.. BAM!.. Exception. "Mixed mode assembly is built against version 'v2.0.50727' of the runtime and cannot be loaded in the 4.0 runtime without additional configuration information." A quick bit of searching on Bing found the answer. To get it working, you need to include the following code in your web.config file:

    <startup useLegacyV2RuntimeActivationPolicy="true">
      <supportedRuntime version="v4.0" />
    </startup>

And then everything magically works. Entity Framework 4 features worked, like lazy loading, and even the POCO templates worked. The only thing that didn't work was the model first development. The SQL generated was for SQL Server and of course wouldn't run on sqlite without some modifications. The only other oddity I found was that in order to have an auto incrementing id, you have to use the full integer data type for sqlite; a regular int won't do the trick. This translates to an Int64, or a long, when working with it in Entity Framework. Not a big deal, but something you need to be aware of. All in all, I am quite impressed with the Entity Framework support I found with sqlite. I wasn't really expecting much at all, and I was pleasantly surprised. I downloaded the ADO.NET sqlite provider from http://sqlite.phxsoftware.com/. If you want to use an embedded database with Entity Framework, give it a look. It will be well worth your time.
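As a concrete illustration of that key-type point (my own sketch, not code from the original post): a POCO entity mapped to a SQLite table with an auto-incrementing primary key would declare the key as a long (Int64) rather than an int. The class and property names here are hypothetical.

    // Hypothetical POCO entity used with the sqlite ADO.NET provider and EF 4.
    // The auto-incrementing key is declared as long (Int64), because SQLite's
    // native integer key type is 64 bits wide; a plain int property won't map.
    public class Note
    {
        public long Id { get; set; }     // auto-increment primary key
        public string Text { get; set; }
    }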

    Read the article

  • TechEd 2012: Day 3 – Morning TFS

    - by Tim Murphy
My morning sessions for day three were dominated by Team Foundation Server. This has been a hot topic for our clients lately, so it really struck a chord. The speaker for the first session was from Boeing. It was nice to hear how a company mixes both agile and waterfall project management. The approaches that he presented were very pragmatic. For their needs, reporting is the crucial part of their decision to use TFS. This was interesting since this is probably the last aspect that most shops would think about. The challenge of getting users to adopt TFS was brought up by the audience. As with the other discussion points, he took a very level-headed stance. The approach he was prescribing was to eat the elephant a bite at a time instead of all at once. If you try to convert your entire shop at once, the culture shock will most likely kill the effort. Another key point he reminded us of is that you need to make sure that standards and compliance are taken into account when you set up TFS. If you don't implement a tool and processes around it that comply with the standards bodies that govern your business, you are in for a world of hurt. Ultimately, the reason they chose TFS was because it was the first tool that incorporated all the ALM features that they needed, and it reduced licensing cost compared with all the different tools they would otherwise need to buy to complete the same tasks. They got to this point by doing an industry evaluation. Although TFS came out on top, he said that it still has a big gap in the Java area. Of course in this market there are vendors helping to close that gap.

The second session was on how continuous feedback in agile is a new focus in VS2012. The problems they intended to address included cycle time, average time to repair, and root cause analysis. The speakers fired features at us as if they were firing a machine gun. I will just say that I am looking forward to digging into the product after seeing this presentation. Beyond that I will simply list some of the key features that caught my attention: the ability to link documents into tasks as artifacts, a web access portal, PowerPoint storyboards, exploratory testing, and request feedback (which allows users to record notes, screen shots and video/audio). See you after the second half.

    Read the article

  • More details on America's Cup use of Oracle Data Mining

    - by charlie.berger
BMW Oracle Racing's America's Cup: A Victory for Database Technology

BMW Oracle Racing's victory in the 33rd America's Cup yacht race in February showcased the crew's extraordinary sailing expertise. But to hear them talk, the real stars weren't actually human. "The story of this race is in the technology," says Ian Burns, design coordinator for BMW Oracle Racing.

Gathering and Mining Sailing Data

From the drag-resistant hull to its 23-story wing sail, the BMW Oracle USA trimaran is a technological marvel. But to learn to sail it well, the crew needed to review enormous amounts of reliable data every time they took the boat for a test run. Burns and his team collected performance data from 250 sensors throughout the trimaran at the rate of 10 times per second. An hour of sailing alone generates 90 million data points. BMW Oracle Racing turned to Oracle Data Mining in Oracle Database 11g to extract maximum value from the data. Burns and his team reviewed and shared raw data with crew members daily using a Web application built in Oracle Application Express (Oracle APEX). "Someone would say, 'Wouldn't it be great if we could look at some new combination of numbers?' We could quickly build an Oracle Application Express application and share the information during the same meeting," says Burns.

Analyzing Wind and Other Environmental Conditions

Burns then streamed the data to the Oracle Austin Data Center, where a dedicated team tackled deeper analysis. Because the data was collected in an Oracle Database, the Data Center team could dive straight into the analytics problems without having to do any extract, transform, and load processes or data conversion. And the many advanced data mining algorithms in Oracle Data Mining allowed the analytics team to build vital performance analytics. For example, the technology team could remove masking elements such as environmental conditions to give accurate data on the best mast rotation for certain wind conditions. Without the data mining, Burns says the boat wouldn't have run as fast. "The design of the boat was important, but once you've got it designed, the whole race is down to how the guys can use it," he says. "With Oracle database technology we could compare the incremental improvements in our performance from the first day of sailing to the very last day. With data mining we could check data against the things we saw, and we could find things that weren't otherwise easily observable and findable."

    Read the article

  • How can I get my progress reviewed as a solo junior developer

    - by Oliver Hyde
I am currently working for a 2 person company, as the solo primary developer. My boss gets the clients, mocks up some png design templates and hands them over to me. This system has been working fine and I'm really enjoying it. The types of projects I work on are for small - medium sized businesses and they usually want a CMS system. Developed from scratch, I'll build a customised backend for the client to add/edit/remove categories, tags, products etc and then output them to the front end according to the design template handed to me. As time has gone on, the projects have increased in complexity, with shopping cart / ordering features and other common e-commerce type features. Again, this system has been working fine and I'm really enjoying it. My issue is my personal development as a programmer. I spend a lot of my spare time reading programming blogs, checking through stackexchange, reading suggested programming books (currently on 'The Pragmatic Programmer', really good so far), doing brain exercises (lumosity.com and khanacademy math problems), doing lots of physical exercise and other personal development type activities. I can't help but feel, though, that I'm missing out on feedback and critique. My boss is great and never holds back on praise in regards to my work, but he unfortunately is either too busy to check my code, or, to be honest, I don't think it's one of his specialties and so he can't provide feedback. I want to know what I'm doing wrong and what I'm doing right. Should I be putting that much logic in the controller, am I modularising my code enough, etc. So what I have done is developed a little 'Family Budgeting' app and tried to do it as cleanly and effectively as I currently know how. What I'm wanting to know is, is there somewhere I can submit this app and have some seasoned developers provide feedback? It's not just a subsection of my code like 'codereview.stackexchange' appears to require, it's my entire workflow that I want critiqued. I know this is a lot to ask, and I expect the main advice given will be to look for a job within a team, which is certainly something I will look into later down the track, but for now I want to persist with my current employment situation and just don't want to develop too many bad habits. Let me know if I can provide any further information to help clarify, or if this isn't the right place for this type of question, in which case I apologise in advance. Didn't want to use reddit as I felt this community fosters more well thought out responses.

    Read the article

  • Architecture strategies for a complex competition scoring system

    - by mikewassmer
Competition description: There are about 10 teams competing against each other over a 6-week period. Each team's total score (out of a 1000 total available points) is based on the total of its scores in about 25,000 different scoring elements. Most scoring elements are worth a small fraction of a point and there will be about 10 X 25,000 = 250,000 total raw input data points. The points for some scoring elements are awarded at frequent regular time intervals during the competition. The points for other scoring elements are awarded at either irregular time intervals or at just one moment in time. There are about 20 different types of scoring elements. Each of the 20 types of scoring elements has a different set of inputs, a different algorithm for calculating the earned score from the raw inputs, and a different number of total available points. The simplest algorithms require one input and one simple calculation. The most complex algorithms consist of hundreds or thousands of raw inputs and a more complicated calculation. Some types of raw inputs are automatically generated. Other types of raw inputs are manually entered. All raw inputs are subject to possible manual retroactive adjustments by competition officials.

Primary requirements: The scoring system UI for competitors and other competition followers will show current and historical total team scores, team standings, team scores by scoring element, raw input data (at several levels of aggregation, e.g. daily, weekly, etc.), and other metrics. There will be charts, tables, and other widgets for displaying historical raw data inputs and scores. There will be a quasi-real-time dashboard that will show current scores and raw data inputs. Aggregate scores should be updated/refreshed whenever new raw data inputs arrive or existing raw data inputs are adjusted. There will be a "scorekeeper UI" for manually entering new inputs, manually adjusting existing inputs, and manually adjusting calculated scores.

Decisions: Should the scoring calculations be performed on the database layer (T-SQL/SQL Server, in my case) or on the application layer (C#/ASP.NET MVC, in my case)? What are some recommended approaches for calculating updated total team scores whenever new raw inputs arrive? Calculating each of the teams' total scores from scratch every time a new input arrives will probably slow the system to a crawl. I've considered some kind of "diff" approach, but that approach may pose problems for ad-hoc queries and some aggregates. I'm trying to draw some sports analogies, but it's tough because most games consist of no more than 20 or 30 scoring elements per game (I'm thinking of a high-scoring baseball game; football and soccer have fewer scoring events per game). Perhaps a financial balance sheet analogy makes more sense because financial "bottom line" calcs may be calculated from 250,000 or more transactions. Should I be making heavy use of caching for this application? Are there any obvious approaches or similar case studies that I may be overlooking?
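One way to picture the "diff" idea mentioned above (purely an illustration of the question's own approach, not a recommendation for either the database or the application layer): when a raw input changes, re-run only the scoring element that input feeds, then apply the difference to the cached team total instead of re-aggregating all 250,000 inputs. The class and method names below are hypothetical.

    // Hypothetical sketch of incremental ("diff") score maintenance.
    using System.Collections.Generic;

    public class ScoreBoard
    {
        // teamId -> (elementId -> current element score)
        private readonly Dictionary<int, Dictionary<int, decimal>> elementScores =
            new Dictionary<int, Dictionary<int, decimal>>();

        // teamId -> cached team total (out of 1000)
        private readonly Dictionary<int, decimal> teamTotals =
            new Dictionary<int, decimal>();

        // Called after one scoring element's algorithm has been re-run for a team.
        public void ApplyElementScore(int teamId, int elementId, decimal newScore)
        {
            if (!elementScores.TryGetValue(teamId, out var perElement))
                elementScores[teamId] = perElement = new Dictionary<int, decimal>();

            perElement.TryGetValue(elementId, out var oldScore);
            perElement[elementId] = newScore;

            teamTotals.TryGetValue(teamId, out var total);
            teamTotals[teamId] = total - oldScore + newScore;   // incremental update
        }
    }

Retroactive adjustments fit the same pattern: re-run the affected element's algorithm, then apply the delta, while ad-hoc historical queries still go against the stored raw inputs.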

    Read the article

  • Talking JavaOne with Rock Star Adam Bien

    - by Janice J. Heiss
Among the most celebrated developers in recent years, especially in the domain of Java EE and JavaFX, is consultant Adam Bien, who, in addition to being a JavaOne Rock Star for Java EE sessions given in 2009 and 2011, is a Java Champion, the winner of Oracle Magazine's 2011 Top Java Developer of the Year Award, and recently won a 2012 JAX Innovation Award as a top Java Ambassador. Bien will be presenting the following sessions:

    TUT3907 - Java EE 6/7: The Lean Parts
    CON3906 - Stress-Testing Java EE 6 Applications Without Stress
    CON3908 - Building Serious JavaFX 2 Applications
    CON3896 - Interactive Onstage Java EE Overengineering

I spoke with Bien to get his take on Java today. He expressed excitement that the smallest companies and startups are showing increasing interest in Java EE. "This is a very good sign," said Bien. "Only a few years ago J2EE was mostly used by larger companies -- now it becomes interesting even for one-person shows. Enterprise Java events are also extremely popular. On the Java SE side, I'm really excited about Project Nashorn." Nashorn is an upcoming JavaScript engine, developed fully in Java by Oracle, and based on the Da Vinci Machine (JSR 292), which is expected to be available for Java 8. Bien expressed concern about a common misconception regarding Java's mediocre productivity. "The problem is not Java," explained Bien, "but rather systems built with ancient patterns and approaches. Sometimes it really is 'Cargo Cult Programming.' Java SE/EE can be incredibly productive and lean without the unnecessary and hard-to-maintain bloat. The real problems are 'Ivory Towers' and not Java's lack of productivity." Bien remarked that if there is one thing he wanted Java developers to understand it is that, "Premature optimization is the root of all evil. Or at least of some evil. Modern JVMs and application servers are hard to optimize upfront. It is far easier to write simple code and measure the results continuously. Identify the hotspots first, then optimize." He advised Java EE developers to, "Rethink everything you know about Enterprise Java. Before you implement anything, ask the question: 'Why?' If there is no clear answer -- just don't do it. Most well known best practices are outdated. Focus your efforts on the domain problem and not the technology." Looking ahead, Bien said, "I would like to see open source application servers running directly on a hypervisor. Packaging the whole runtime in a single file would significantly simplify the deployment and operations." Check out a recent Java Magazine interview with Bien about his Java EE 6 stress monitoring tool here. Originally published on blogs.oracle.com/javaone.

    Read the article

  • How to Add Control Panel to “My Computer” in Windows 7 or Vista

    - by The Geek
Back in the Windows XP days, you could easily add Control Panel to My Computer with a simple checkbox in the folder view settings. Windows 7 and Vista don’t make this quite as easy, but there’s still a way to get it back. To make this tweak, we’ll be doing a quick registry hack, but there’s a downloadable version provided as well.

Manual Registry Tweak to Add Control Panel

Open up regedit.exe through the start menu search or run box, and then browse down to the following key:

    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\MyComputer\NameSpace

Now that you’re there, you’ll need to right-click and create a new key. If you want to add the regular Control Panel view, with the categories, you’ll need to use one GUID as the name of the key. If you want the icon view instead, you can use the other key. Here they are:

    Category View: {26EE0668-A00A-44D7-9371-BEB064C98683}
    Icon View:     {21EC2020-3AEA-1069-A2DD-08002B30309D}

Now over in the Computer view, just hit the F5 key to refresh the panel, and you should see the new icon pop up in the list. When you click on the icon you’ll be taken to Control Panel. If you didn’t know how to change the view before, you can use the drop-down box on the right-hand side to switch between Category and icon view.

Downloadable Registry Hack

Rather than deal with manual registry editing, you can simply download the file, extract it, and then either double-click on the AddCategoryControlPanel.reg to add the Category view icon, or AddIconControlPanel.reg to add the other icon. There’s an uninstall script provided for each. Download ControlPanelMyComputer Registry Hack from howtogeek.com
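For reference, a hand-rolled .reg file equivalent to the Category-view tweak described above might look like the following sketch (my own, based on the key path and GUID quoted in the article; the downloadable hack presumably does much the same thing):

    Windows Registry Editor Version 5.00

    ; Adds the Category-view Control Panel icon under Computer.
    ; Use {21EC2020-3AEA-1069-A2DD-08002B30309D} instead for the icon view.
    [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\MyComputer\NameSpace\{26EE0668-A00A-44D7-9371-BEB064C98683}]

Merging the file (double-click it, or run reg import from a command prompt) simply creates the empty key, which is all the tweak needs; deleting the key undoes it.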

    Read the article

  • How to Create a Realistic Timeline for your Projects

    - by Aditi
Developing a realistic project timeline is one of the biggest and most challenging tasks for any team. We here at JustSkins have learned over time that developing and adhering to a timeline isn’t easy, but it is not impossible. From technical glitches to human resource issues, unexpected complications can come up at any time during the project life cycle. However, there are many things you can do to keep the project from going off-track. A specific timeline is a very important statistic for time management planning and for keeping your client informed of progress. Rigid time tracking assures the client that you are committed to achieving specific project milestones on time. The more you work on varied IT projects, the more you learn about the different aspects of a project, and the better your future estimates and timelines become.

Make a Structure

When estimating the time required to accomplish each task, consider which team members will be involved, and assign the amount of time each individual must put in to the project. Define scope and dependencies, and set deadlines for accomplishing them. Sometimes working in phases or modules helps get more done in less time. Use a project management tool in order to systematize the collaboration between team members.

Realistic Goal Setting

One approach is to keep a bandwidth of a few days to deal with the delays, errors and incorrect-coding issues you are likely to have along the way. It is very realistic to keep the delivery date given to the client different from the internal delivery timeline. If your resource is having a hard time finishing a task in the time specified, keep some room to give him a day or two extra to accomplish it. This does not upset client delivery and is the safe way of doing projects.

Keep an Insightful Approach

Identify potential problems before they delay your project. To be a great IT manager you have to be honest and diplomatic at the same time; it is essential to give your clients early notice of potential delays or scope changes. In situations where delay is inevitable, you should be in a position to provide immediate, on-demand status progress reports. Learning from past experiences is very important: keep track of the actual time spent on all aspects of your projects, as this will help you create better future estimates and timelines.

    Read the article

  • Speed Up the Help Dialog in Windows and Office

    - by Matthew Guay
When you click Help, you don’t want to wait for your computer to bring it to you. Here’s how you can speed up the Help dialog in Windows and Office. If you have a slow internet connection, chances are you’ve been frustrated by the Help dialog in Windows and Office trying to download fresh content every time you open it. This can be great if the updated help files contain better content, but sometimes you just want to find what you were looking for without waiting. Here’s how you can turn off the automatic online help.

Use Local Help in Windows

Windows 7 and Vista’s Help dialog usually tries to load the latest content from the net, but this can take a long time on slow connections. If you’re seeing that screen a lot, you may want to switch to offline help. Click the “Online Help” button at the bottom, and select “Get offline Help”. Now your computer will just load the pre-installed help files. And don’t worry; if there’s a major update to your help files, Windows will download and install it through Windows Update. Stupid Geek Tip: An easy way to open Windows Help is to click on your desktop or Start Menu and press F1 on your keyboard.

Use Local Help in Office

This same trick works in Office 2007 and 2010. We’ve actually had more problems with Office’s help being tardy. Solve this the same way as with Windows help. Click on the “Connected to Office.com” or “Connected to Office Online” button, depending on your version of Office, and select “Show content only from this computer”. This will automatically change the settings for Help in all of your Office applications. While this may not be a major trick, it can be helpful, especially if you have a slow internet connection and want to get things done quickly.

    Read the article

  • Oracle Optimized Solutions at Oracle OpenWorld 2012

    - by ferhatSF
Have you registered for Oracle OpenWorld 2012 in San Francisco from September 30 to October 4? Visit the Oracle OpenWorld 2012 site today for registration and more information. Come join us to hear how Oracle Optimized Solutions can help you save money, reduce integration risks, and improve user productivity. Oracle Optimized Solutions are designed, pre-tested, tuned and fully documented architectures for optimal performance and availability. They provide written guidelines to help size, configure, purchase and deploy enterprise solutions that address common IT problems. Built with flexibility in mind, Oracle Optimized Solutions can be deployed as complete solutions or easily tailored to meet your specific needs - they are proven to save money, reduce integration risks and improve user productivity. Here is a preview of the planned Oracle OpenWorld sessions(*) on Oracle Optimized Solutions.

Monday, October 1, 2012 (Time | Session ID | Title | Location)
12:15 PM | CON7916 | Accelerate Oracle E-Business Suite Deployment with SPARC SuperCluster | Moscone West - 2001
03:15 PM | GEN9691 | General Session: Accelerate Your Business with the Oracle Hardware Advantage | Moscone North - Hall D
04:45 PM | CON4821 | Building a Flexible Enterprise Cloud Infrastructure on Oracle SPARC Systems | Moscone West - 2001

Tuesday, October 2, 2012 (Time | Session ID | Title | Location)
10:15 AM | CON4561 | Backup-and-Recovery Best Practices with Oracle Engineered Systems Products | Moscone South - 252
11:45 AM | CON3851 | Optimizing JD Edwards EnterpriseOne on SPARC T4 Servers for Best Performance | Moscone West - 2000
01:15 PM | GEN11472 | General Session: Breakthrough Efficiency in Private Cloud Infrastructure | Moscone West - 3014
01:15 PM | CON4600 | Extreme Storage Scale and Efficiency: Lessons from a 100,000-Person Organization | Moscone South - 252
05:00 PM | CON9465 | Next-Generation Directory: Oracle Unified Directory | Moscone West - 3008
05:00 PM | CON4088 | Accelerate Your SAP Landscape with the Oracle SPARC SuperCluster | Moscone West - 2001
05:00 PM | CON7743 | High-Performance Security for Oracle Applications Using SPARC T4 Systems | Moscone West - 2000
05:00 PM | CON3857 | Archive Strategies for 100 Percent Data Availability | Moscone South - 270

Wednesday, October 3, 2012 (Time | Session ID | Title | Location)
10:15 AM | CON6528 | Configure Oracle Hybrid Columnar Compression to Optimize Query Database Performance up to 10x | Moscone South - 252
11:45 AM | CON2590 | Breakthrough in Private Cloud Management on SPARC T-Series Servers | Moscone South - 270
01:15 PM | CON4289 | Oracle Optimized Solution for Siebel CRM at ACCOR | Moscone West - 2000
05:00 PM | CON7570 | Improve PeopleSoft HCM Performance and Reliability with SPARC SuperCluster | Moscone South - 252

* Schedule subject to change

In addition, there will be Oracle Optimized Solutions Hands-On-Labs sessions. Please enroll ahead of time as space is limited.

Oracle Optimized Solutions: Hands on Labs in Oracle OpenWorld
Place: Marriott Marquis - Salon 14/15 (Date and Time | Session ID | Title)
Monday, October 1, 2012, 01:45 PM | HOL9868 | Enterprise Cloud Infrastructure for SPARC with Oracle Enterprise Manager Ops Center 12c
Monday, October 1, 2012, 03:15 PM | HOL9907 | Oracle Virtual Desktop Infrastructure Performance and Tablet Mobility
Wednesday, October 3, 2012, 05:00 PM | HOL9870 | x86 Enterprise Cloud Infrastructure with Oracle VM 3.x and Sun ZFS Storage Appliance
Thursday, October 4, 2012, 11:15 AM | HOL9869 | 0 to Database Backup and Recovery in 60 Minutes

Oracle Optimized Solutions executives and experts will also be at hand for discussions and follow ups.
And don’t forget to catch live demonstrations of our complete Oracle Optimized Solutions while at Oracle OpenWorld 2012 in San Francisco. We recommend the use of the Schedule Builder tool to plan your visit to the conference and for pre-enrollment in sessions of your interest. We hope to see you there!

    Read the article
