Search Results

Search found 4079 results on 164 pages for 'parts'.


  • How to stop Cron from sending messages about errors

    - by Beck
    I have these strange mails coming from cron:

        Return-Path: <[email protected]>
        Delivered-To: [email protected]
        Received: by domain.com (Postfix, from userid 0) id 6F944264D0; Mon, 10 Jan 2011 10:35:01 +0000 (UTC)
        From: [email protected] (Cron Daemon)
        To: [email protected]
        Subject: Cron <root@domain> lynx -dump http://www.domain.com/cron/realqueue
        Content-Type: text/plain; charset=ANSI_X3.4-1968
        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin>
        X-Cron-Env: <HOME=/root>
        X-Cron-Env: <LOGNAME=root>
        Message-Id: <[email protected]>
        Date: Mon, 10 Jan 2011 10:35:01 +0000 (UTC)

        /bin/sh: lynx: not found

    I have these cron settings in the crontab file:

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        */5 * * * * lynx -dump http://www.domain.com/cron/realqueue
        17 * * * * root cd / && run-parts --report /etc/cron.hourly
        25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )

    Lynx is installed on my Ubuntu machine as well. Of course, domain.com stands in for my actual domain; I just replaced it. Thanks ;)
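    A minimal sketch of the usual fix (hedged: /usr/bin/lynx is an assumption; confirm the real location with "which lynx" from a login shell). Giving cron the full path removes any dependence on cron's PATH handling, and either MAILTO="" or redirecting output stops the error mails the title asks about:

        MAILTO=""
        */5 * * * * /usr/bin/lynx -dump http://www.domain.com/cron/realqueue > /dev/null 2>&1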

    Read the article

  • Loading dependencies for custom puppet functions

    - by Ben Smith
    I have written a custom puppet function, which is working fine, that depends on the cloudservers gem (a Rackspace client library). This is fine if I have pre-installed the gem on a server before running puppet, but it totally breaks if I have not installed the gem, as the function seems to be run during the 'compilation' sweep, well before my package definition is realised. Here's what my .pp looks like, with get_hosts the function that requires the cloudservers gem:

        package { "rubygems":
          ensure   => installed,
          provider => "gem";
        }
        package { "cloudservers":
          ensure   => installed,
          provider => "gem",
          require  => Package["rubygems"];
        }
        class hosts::us {
          $hosts = get_hosts("us")
          hostentry { $hosts: }
        }
        define hostentry() {
          $parts   = split($name, ',')
          $address = $parts[0]
          $ip      = $parts[1]
          $aliases = $parts[2]
          host { $address:
            ip           => $ip,
            host_aliases => $aliases,
          }
        }

    Is there a way to stop the function getting run so early, or at least to have its run depend upon the library being installed? Alternatively, is there a way that I can add dependencies somewhere in the functions folder that will be available to the function?
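    One common workaround, shown here as a sketch rather than the poster's code (the function and gem names follow the question; everything else is illustrative): defer the gem load from file-load time to call time. This keeps catalogs that never call the function from breaking; note it does not solve the chicken-and-egg case where the very same run is supposed to install the gem, which still has to exist before compilation:

        # get_hosts.rb - custom parser function with a lazy require
        module Puppet::Parser::Functions
          newfunction(:get_hosts, :type => :rvalue) do |args|
            require 'rubygems'
            require 'cloudservers'   # loaded here, at evaluation time, not at parse time
            region = args[0]
            # ... query Rackspace and return "address,ip,aliases" strings ...
          end
        end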

    Read the article

  • New to building computers worried about temps

    - by dave
    I'm new to building my own computers and I was wondering about maximum temperatures. I understand that the room temperature can affect the computer's temperature, but how relevant is it? I understand that if my room temperature is 20°C, none of my computer's parts could be lower than that. But if my room is 27°C instead of 20°C, would this cause my computer's parts to heat up more or faster? The new computer I built myself for gaming is: i7 2600k, 16GB DDR3 1600 RAM, HD6970 2GB, 240GB SSD (I bought a NAS with 3 2TB drives in RAID 5 for my home network), 850W modular PSU. I also have my old HP computer: i3 2120, 8GB RAM, HD6770, 1TB HDD. I also have 3 laptops in my household, but I am not worried about their temps; they heat up my legs, but they are never under stress. Due to size and money reasons I used an old case, and it only has one of the sides left on it. Is this bad for the computer, and will the extra dust cause problems? Should I leave it this way, or face the missus' wrath and buy a case? If so, is there any particular case I should get? I don't care about looks; I just want a card reader and USB slots, and for it to run as cool as or cooler than now. My case has 1 fan. Also, what are the max temps for my new and old computer parts? Is 40°C under load OK for my CPU? What about 70°C for my GPU, is that OK too, or should I worry? What are normal and safe temps for my components? I have looked around, but there seem to be lots of different answers. I know that 100°C is bad, but I want my parts to last as long as possible, and this site always seems to give good replies without arguing or flaming.

    Read the article

  • Weekly cron executing 2 times:

    - by yes123
    Hi guys, I have placed a .sh file that runs a PHP script weekly. This script should run only once, but every Sunday it runs at 1:30 am and 6:50 am. Any way to fix this?

    Add1: /etc/cron.weekly/cronweek:

        #!/bin/bash
        /usr/bin/php -f /home/my/path/to/script/cronweek.php

    Add2: crontab file:

        # /etc/crontab: system-wide crontab
        # Unlike any other crontab you don't have to run the `crontab'
        # command to install the new version when you edit this file
        # and files in /etc/cron.d. These files also have username fields,
        # that none of the other crontabs do.
        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        # m h dom mon dow user command
        17 * * * * root cd / && run-parts --report /etc/cron.hourly
        25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
        #
        */1 * * * * root /usr/local/rtm/bin/rtm 35 > /dev/null 2> /dev/null
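    A diagnostic sketch (hedged: the paths are the Debian/Ubuntu defaults). The 6:50 am run matches the "47 6 * * 7" run-parts line above; a second run usually means something else also executes the weekly scripts, typically anacron or a stray copy of the script, and both only take a moment to check:

        # look for anything else that mentions the script or cron.weekly
        grep -r cronweek /etc/cron* /etc/anacrontab /var/spool/cron 2>/dev/null
        # anacron's own schedule for the weekly job
        grep weekly /etc/anacrontab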

    Read the article

  • Benefits of PerformancePoint Services Using SharePoint Server 2010

    - by Wayne
    What is PerformancePoint Services? Most of the time it happens that the metrics that make up your key performance indicators are not simple values from a data source. In SharePoint Server 2007 PerformancePoint Services, you could create two kinds of KPI metrics: simple single-value metrics from any supported data source, or complex multiple-value metrics from a single Analysis Services data source using MDX. Now things are even easier with PerformancePoint Services in SharePoint 2010. Let us check what it is. PerformancePoint Services in SharePoint Server 2010 is a performance management service that you can use to monitor and analyze your business. By providing flexible, easy-to-use tools for building dashboards, scorecards, reports, and key performance indicators (KPIs), PerformancePoint Services can help everyone across an organization make informed business decisions that align with companywide objectives and strategy. Scorecards, dashboards, and KPIs help drive accountability. Integrated analytics help employees move quickly from monitoring information to analyzing it and, when appropriate, sharing it throughout the organization. Prior to the addition of PerformancePoint Services to SharePoint Server, Microsoft Office PerformancePoint Server 2007 functioned as a standalone server. Now PerformancePoint functionality is available as an integrated part of the SharePoint Server Enterprise license, as is the case with Excel Services in Microsoft SharePoint Server 2010. The popular features of earlier versions of PerformancePoint Services are preserved, along with numerous enhancements and additional functionality.

    New PerformancePoint Services features: PerformancePoint Services can now utilize SharePoint Server scalability, collaboration, backup and recovery, and disaster recovery capabilities. Dashboards and dashboard items are stored and secured within SharePoint lists and libraries, providing you with a single security and repository framework.

    New features and enhancements of SharePoint 2010 PerformancePoint Services:

    • With PerformancePoint Services functioning as a service in SharePoint Server, dashboards and dashboard items are stored and secured within SharePoint lists and libraries, providing you with a single security and repository framework. The new architecture also takes advantage of SharePoint Server scalability, collaboration, backup and recovery, and disaster recovery capabilities. You can also include and link PerformancePoint Services Web Parts with other SharePoint Server Web Parts on the same page. The new architecture also streamlines security models that simplify access to report data.
    • The Decomposition Tree is a new visualization report type available in PerformancePoint Services. You can use it to quickly and visually break down higher-level data values from a multi-dimensional data set to understand the driving forces behind those values. The Decomposition Tree is available in scorecards and analytic reports, and ultimately in dashboards.
    • You can access more detailed business information with improved scorecards. Scorecards have been enhanced to make it easy for you to drill down and quickly access more detailed information. PerformancePoint scorecards also offer more flexible layout options, dynamic hierarchies, and calculated KPI features. Using this enhanced functionality, you can now create custom metrics that use multiple data sources. You can also sort, filter, and view variances between actual and target values to help you identify concerns or risks.
    • Better Time Intelligence filtering capabilities that you can use to create and use dynamic time filters that are always up to date. Other improved filters improve the ability for dashboard users to quickly focus in on information that is most relevant.
    • Ability to include and link PerformancePoint Services Web Parts together with other PerformancePoint Services Web Parts on the same page.
    • Easier authoring and publishing of dashboard items by using Dashboard Designer.
    • SQL Server Analysis Services 2008 support.
    • Increased support for accessibility compliance in individual reports and scorecards.
    • The KPI Details report is a new report type that displays contextually relevant information about KPIs, metrics, rows, columns, and cells within a scorecard. The KPI Details report works as a Web Part that links to a scorecard or individual KPI to show relevant metadata to the end user in SharePoint Server. This Web Part can be added to PerformancePoint dashboards or any SharePoint Server page.
    • Create analytic reports to better understand the underlying business forces behind the results. Analytic reports have been enhanced to support value filtering, new chart types, and server-based conditional formatting.

    To conclude, PerformancePoint Services, by becoming tightly integrated with SharePoint Server 2010, takes advantage of many enterprise-level SharePoint Server 2010 features. Unfortunately, SharePoint Foundation 2010 doesn't include this feature. There are still many choices in the SharePoint family of products, including SharePoint Server 2010, SharePoint Foundation, SharePoint Server 2007, and associated free SharePoint web parts and templates.

    Read the article

  • SharePoint MOSS - Serve HTTP content on an HTTPS page without Mixed Content Warning?

    - by kcb263
    Our "portal-like" SharePoint site is served using HTTPS/SSL. So a user goes to https://web.company.com and sees content and different Web Parts. So far, no problem. The desire now is to have new Web Parts added that either frame HTTP content (such as Weather Bug) or HTTP RSS feeds. The issue that arises is that by doing this, results in a "Mixed Content" warning in the browser. Has anybody successfully been able to implement such a scenario, or one similar to it? The options we have looked at, unsuccessfully, have been: using Apache Reverse Proxy Server mirror an external site Custom Web Parts

    Read the article

  • Launching mysql server: same permissions for root and for user

    - by toinbis
    Hi folks, I have been directed here from Stack Overflow; I am reposting the question and adding my.cnf at the end of the post. So far in my 10+ years of experience with Linux, all the permission problems I've ever encountered have been successfully solved with chmod -R 777 /path/where/the/problem/has/occured (every lie has a grain of truth in it :). This time the trick doesn't work, so I'm turning to you for help. I'm compiling a MySQL server from scratch with zc.buildout (www.buildout.org). I launch it by executing /home/toinbis/.../parts/mysql/bin/mysqld_safe, and this works. The thing is that I'll be launching this from within a supervisor (supervisord.org) script, and when used on the deployment server it will need to be launched with root permissions (so that the nginx server, launched with the same script, will have access to port 80). The problem is that sudo /home/toinbis/.../parts/mysql/bin/mysqld_safe fails, generating the error posted below in the mysql error log (apache and nginx work as expected). http://lists.mysql.com/mysql/216045 suggests that "there are two errors: A missing table and a file system that mysqld doesn't have access to". The mysql datadir and all the mysql server binary files have 777 permissions, the table mysql.plugin does exist and has 777 permissions (so why "Can't open the mysql.plugin table"?), and "sudo touch mysql_datadir/tmp/file" does create a file (so why "Can't create/write to file /home/toinbis/.../runtime/mysql_datadir/tmp/ib4e9Huz"?). chgrp -R mysql mysql_datadir and adding the "root, toinbis, mysql" users to the mysql group (cat /etc/group | grep mysql outputs mysql:x:124:root,toinbis,mysql) has no effect: when I launch it as a casual user, it starts; when as root, it fails. Does the mysql server, even when started as root, try to operate as another user, let's say 'mysql'? But even in that case, adding the mysql user to the mysql group and making all the mysql_datadir files belong to the mysql group should make things work smoothly. I do know that it might be a better idea simply to launch nginx as root and mysql as just a user, but this error irritated me enough to devote the energy not only to "make things work", but to make things work exactly as I intended initially, so as to have a proof of concept that it's possible.

    And this is the generated error:

        091213 20:02:55 mysqld_safe Starting mysqld daemon with databases from /home/toinbis/.../runtime/mysql_datadir
        /home/toinbis/.../parts/mysql/libexec/mysqld: Table 'plugin' is read only
        091213 20:02:55 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
        /home/toinbis/.../parts/mysql/libexec/mysqld: Can't create/write to file '/home/toinbis/.../runtime/mysql_datadir/tmp/ib4e9Huz' (Errcode: 13)
        091213 20:02:55 InnoDB: Error: unable to create temporary file; errno: 13
        091213 20:02:55 [ERROR] Plugin 'InnoDB' init function returned error.
        091213 20:02:55 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        091213 20:02:55 [ERROR] Can't start server : Bind on unix socket: Permission denied
        091213 20:02:55 [ERROR] Do you already have another mysqld server running on socket: /home/toinbis/.../runtime/var/pids/mysql.sock ?
        091213 20:02:55 [ERROR] Aborting
        091213 20:02:55 [Note] /home/toinbis/.../parts/mysql/libexec/mysqld: Shutdown complete
        091213 20:02:55 mysqld_safe mysqld from pid file /home/toinbis/.../runtime/var/pids/mysql.pid ended

    My my.cnf (the basedir and datadir, including tmpdir, have chmod -R 777 permissions):

        [client]
        socket = /home/toinbis/.../runtime/var/pids/mysql.sock
        port = 8002

        [mysqld_safe]
        socket = /home/toinbis/.../runtime/var/pids/mysql.sock
        nice = 0

        [mysqld]
        #
        # * Basic Settings
        #
        socket = /home/toinbis/.../runtime/var/pids/mysql.sock
        port = 8002
        pid-file = /home/toinbis/.../runtime/var/pids/mysql.pid
        basedir = /home/toinbis/.../parts/mysql
        datadir = /home/toinbis/.../runtime/mysql_datadir
        tmpdir = /home/toinbis/.../runtime/mysql_datadir/tmp
        skip-external-locking
        bind-address = 127.0.0.1
        log-error = /home/toinbis/.../runtime/logs/mysql_errorlog
        #
        # * Fine Tuning
        #
        key_buffer = 16M
        max_allowed_packet = 32M
        thread_stack = 128K
        thread_cache_size = 8
        myisam-recover = BACKUP
        #max_connections = 100
        #table_cache = 64
        #thread_concurrency = 10
        #
        # * Query Cache Configuration
        #
        query_cache_limit = 1M
        query_cache_size = 16M
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        #log = /home/toinbis/.../runtime/logs/mysql_logs/mysql.log
        #
        # Error logging goes to syslog. This is a Debian improvement :)
        #
        # Here you can see queries with especially long duration
        #log_slow_queries = /home/toinbis/.../runtime/logs/mysql_logs/mysql-slow.log
        #long_query_time = 2
        #log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        #server-id = 1
        #log_bin = /home/toinbis/.../runtime/mysql_datadir/mysql-bin.log
        #binlog_format = ROW
        #read_only = 0
        #expire_logs_days = 10
        #max_binlog_size = 100M
        #sync_binlog = 1
        #binlog_do_db = include_database_name
        #binlog_ignore_db = include_database_name
        #
        # * InnoDB
        #
        innodb_data_file_path = ibdata1:10M:autoextend
        innodb_buffer_pool_size = 64M
        innodb_log_file_size = 16M
        innodb_log_buffer_size = 8M
        innodb_flush_log_at_trx_commit = 1
        innodb_file_per_table
        innodb_locks_unsafe_for_binlog = 1

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 32M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completion

        [isamchk]
        key_buffer = 16M

    Any ideas much appreciated! Regards, to.

    P.S. Sorry for the messy hyperlinks; it's my first post and the anti-spam feature of SF doesn't allow me to post them properly :)
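    A hedged diagnostic sketch (the "..." below is the elided path from the question, left as-is). One detail worth checking: when mysqld_safe is started by root, mysqld drops privileges and runs as the --user from my.cnf or its built-in default (typically mysql), so every parent directory of the datadir, including the home directory itself, must be traversable by that user; 777 on the datadir alone is not enough if /home/toinbis is mode 700:

        # show the permissions of every path component as seen by mysqld
        namei -m /home/toinbis/.../runtime/mysql_datadir
        # or keep mysqld running as the invoking user for a quick test
        sudo /home/toinbis/.../parts/mysql/bin/mysqld_safe --user=root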

    Read the article

  • Install-SPSolution : This solution contains no resources scoped for a Web application and cannot be deployed to a particular Web application

    - by Josh
    I have a PowerShell script that deploys about 12 web parts. They have all been created through Visual Studio 2010 and are being deployed to SharePoint 2010. I am getting the following error when running Install-SPSolution for one of my web parts:

        Install-SPSolution : This solution contains no resources scoped for a Web application and cannot be deployed to a particular Web application.

    Can someone help me debug this? Every other Install-SPSolution command uses -AllWebApplications, and I do not want to specify the web application directly using -URL. Here is the command that is breaking (it is the same command used to successfully deploy all 11 other web parts):

        Install-SPSolution –Identity PortalSelector.wsp -AllWebApplications -GACDeployment
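    A hedged sketch of the usual workaround: this error generally means the WSP contains no web-application-scoped resources (no SafeControls entries or web.config changes), so it deploys globally and the web-application flags simply have to be dropped:

        Install-SPSolution -Identity PortalSelector.wsp -GACDeployment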

    Read the article

  • Android “open for embedded”? Must-read Ars Technica article

    - by terrencebarr
    A few days ago Ars Technica published an article, “Google’s iron grip on Android: Controlling open source by any means necessary”. If you are considering Android for embedded, this article is a must-read to understand the severe ramifications of Google’s tight (and tightening) control of the Android technology and ecosystem. Some quotes from the Ars Technica article:

    “Android is open – except for all the good parts“
    “Android actually falls into two categories: the open parts from the Android Open Source Project (AOSP) … and the closed source parts, which are all the Google-branded apps”
    “Android open source apps … turn into abandonware by moving all continuing development to a closed source model.”
    “Joining the OHA requires a company to sign its life away and promise to not build a device that runs a competing Android fork.”
    “Google Play Services is a closed source app owned by Google … to turn the “Android App Ecosystem” into the “Google Play Ecosystem”
    “You’re allowed to contribute to Android and allowed to use it for little hobbies, but in nearly every area, the deck is stacked against anyone trying to use Android without Google’s blessing“

    Compare this with a recent Wired article, “Oracle Makes Java More Relevant Than Ever”:

    “Oracle has actually opened up Java even more — getting rid of some of the closed-door machinations that used to be part of the Java standards-making process. Java has been raked over the coals for security problems over the past few years, but Oracle has kept regular updates coming. And it’s working on a major upgrade to Java, due early next year.”

    Cheers, – Terrence

    Filed under: Embedded, Mobile & Embedded. Tagged: Android, embedded, Java Embedded, Open Source

    Read the article

  • Emacs: methods for debugging python

    - by Tom Willis
    I use emacs for all my code editing needs. Typically, I will use M-x compile to run my test runner, which I would say gets me about 70% of what I need to keep the code on track; however, lately I've been wondering how it might be possible to use M-x pdb on occasions where it would be nice to hit a breakpoint and inspect things. In my googling I've found some things that suggest that this is useful/possible. However, I have not managed to get it working in a way that I fully understand. I don't know if it's the combination of buildout + appengine that might be making it more difficult, but when I try to do something like:

        M-x pdb
        Run pdb (like this): /Users/twillis/projects/hydrant/bin/python /Users/twillis/bin/pdb /Users/twillis/projects/hydrant/bin/devappserver /Users/twillis/projects/hydrant/parts/hydrant-app/

    where .../bin/python is the interpreter buildout makes with the path set for all the eggs, and ~/bin/pdb is a simple script to call into pdb.main using the current python interpreter:

        HellooKitty:hydrant twillis$ cat ~/bin/pdb
        #! /usr/bin/env python
        if __name__ == "__main__":
            import sys
            sys.version_info
            import pdb
            pdb.main()
        HellooKitty:hydrant twillis$

    .../bin/devappserver is the dev_appserver script that the buildout recipe makes for the GAE project, and .../parts/hydrant-app is the path to the app.yaml. I am first presented with a prompt:

        Current directory is /Users/twillis/bin/

    C-c C-f: nothing happens, but

        HellooKitty:hydrant twillis$ ps aux | grep pdb
        twillis 469 100.0 1.6 168488 67188 s002 Rs+ 1:03PM 0:52.19 /usr/local/bin/python2.5 /Users/twillis/projects/hydrant/bin/python /Users/twillis/bin/pdb /Users/twillis/projects/hydrant/bin/devappserver /Users/twillis/projects/hydrant/parts/hydrant-app/
        twillis 477 0.0 0.0 2435120 420 s000 R+ 1:05PM 0:00.00 grep pdb
        HellooKitty:hydrant twillis$

    so something is happening. C-x [space] will report that a breakpoint has been set. But I can't manage to get things going. It feels like I am missing something obvious here. Am I? So, is interactive debugging in emacs worthwhile? Is interactively debugging a Google App Engine app possible? Any suggestions on how I might get this working?
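    One hedged suggestion (the paths reuse the question's; -u and "python -m pdb" are standard CPython, but whether emacs' gud mode tracks this particular command line cleanly is the assumption here): pdb mode tends to behave better when the interpreter runs pdb as a module with unbuffered output, rather than going through a wrapper script:

        M-x pdb
        Run pdb (like this): /Users/twillis/projects/hydrant/bin/python -u -m pdb /Users/twillis/projects/hydrant/bin/devappserver /Users/twillis/projects/hydrant/parts/hydrant-app/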

    Read the article

  • Latest News on Service, Field Service and Depot Repair Products

    - by LuciaC
    Service and Depot Repair Customer Advisory Boards (CAB)

    In November 2012 the Service and Depot Repair CAB joined together for a combined meeting at Oracle HQ in Redwood Shores, California to discuss all the latest news in the Oracle Service, Field Service and Depot Repair products. Over four days attendees shared their experiences with implementing and using these EBS CRM products and heard details of recent enhancements and future product plans direct from Development. You can access all the Oracle presentations via Doc ID 1511768.1. Here are just some of the highlights:

    Field Service:
    • Next Generation Dispatch Center
    • Endeca Integration
    • Case Study: Oracle Sun Field Service implementation

    Mobile Field Service:
    • New capabilities for technician-facing applications

    Service:
    • Integration with Oracle Projects
    • New Teleservice enhancements

    Spares Management:
    • Supplier Warranty
    • External Repair Execution

    • Oracle Knowledge (Inquira) Introduction for Service Organizations

    If you weren't at the CAB, take a look at these presentations for great information about what's new and what's coming up in these products.

    12.1.3++ Features for Field Service, Mobile Field Service, Spares Management, FSTP & Advanced Scheduler

    In June 2012 the R12.1.3++ patches were released for Field Service, Mobile Field Service, FSTP and Advanced Scheduler. These patches contain new and updated functionality for these CRM Service suite modules. New functionality includes:

    Field Service/FSTP/MFS:
    • Support for Transfer Parts across subinventories in different organizations
    • Validation to ensure Installed Item matches Returned Item
    • MFS Wireless - Support for Special Address Creation
    • MFS Wireless - Enhanced Debrief Flow

    Advanced Scheduler:
    • Scheduler UI - Display of Spares Sourcing Information
    • Auto Commit (Release) Tasks by Territory
    • Dispatch Center UI - Display Spare Parts Arrival Information

    Spares Management:
    • Enhancements to the Task Reassignment Process
    • Enhancements to the Parts Requirements UI
    • Supply Chain - Enhancements to allow filtering of ship methods from source location by distance

    Check the following notes for more details and relevant patch numbers:
    Doc ID 1463333.1 - Oracle Field Service Release Notes, Release 12.1.3++
    Doc ID 1452470.1 - Field Service Technician Portal 12.1.3++ New Features
    Doc ID 1463066.1 - Oracle Advanced Scheduler Release Notes, Release 12.1.3++
    Doc ID 1463335.1 - Oracle Spares Management Release Notes, Release 12.1.3++
    Doc ID 1463243.1 - Oracle Mobile Field Service Release Notes, Release 12.1.3++

    Read the article

  • rkhunter: right way to handle warnings further?

    - by zuba
    I googled a bit and checked out the first two links it found:

    http://www.skullbox.net/rkhunter.php
    http://www.techerator.com/2011/07/how-to-detect-rootkits-in-linux-with-rkhunter/

    They don't mention what I should do in the case of warnings such as:

        Warning: The command '/bin/which' has been replaced by a script: /bin/which: POSIX shell script text executable
        Warning: The command '/usr/sbin/adduser' has been replaced by a script: /usr/sbin/adduser: a /usr/bin/perl script text executable
        Warning: The command '/usr/bin/ldd' has been replaced by a script: /usr/bin/ldd: Bourne-Again shell script text executable
        Warning: The file properties have changed:
        File: /usr/bin/lynx
        Current hash: 95e81c36428c9d955e8915a7b551b1ffed2c3f28
        Stored hash : a46af7e4154a96d926a0f32790181eabf02c60a4

    Q1: Are there more extended HowTos which explain how to deal with different kinds of warnings?

    And the second question: were my actions sufficient to resolve these warnings?

    a) Find the package which contains the suspicious file, e.g. debianutils for the file /bin/which:

        ~ > dpkg -S /bin/which
        debianutils: /bin/which

    b) Check the debianutils package checksums:

        ~ > debsums debianutils
        /bin/run-parts OK
        /bin/tempfile OK
        /bin/which OK
        /sbin/installkernel OK
        /usr/bin/savelog OK
        /usr/sbin/add-shell OK
        /usr/sbin/remove-shell OK
        /usr/share/man/man1/which.1.gz OK
        /usr/share/man/man1/tempfile.1.gz OK
        /usr/share/man/man8/savelog.8.gz OK
        /usr/share/man/man8/add-shell.8.gz OK
        /usr/share/man/man8/remove-shell.8.gz OK
        /usr/share/man/man8/run-parts.8.gz OK
        /usr/share/man/man8/installkernel.8.gz OK
        /usr/share/man/fr/man1/which.1.gz OK
        /usr/share/man/fr/man1/tempfile.1.gz OK
        /usr/share/man/fr/man8/remove-shell.8.gz OK
        /usr/share/man/fr/man8/run-parts.8.gz OK
        /usr/share/man/fr/man8/savelog.8.gz OK
        /usr/share/man/fr/man8/add-shell.8.gz OK
        /usr/share/man/fr/man8/installkernel.8.gz OK
        /usr/share/doc/debianutils/copyright OK
        /usr/share/doc/debianutils/changelog.gz OK
        /usr/share/doc/debianutils/README.shells.gz OK
        /usr/share/debianutils/shells OK

    c) Relax about /bin/which, as I see "/bin/which OK".

    d) Put the file /bin/which in /etc/rkhunter.conf as:

        SCRIPTWHITELIST="/bin/which"

    e) For warnings such as the one for the file /usr/bin/lynx, update the checksum with:

        rkhunter --propupd /usr/bin/lynx.cur

    Q2: Am I resolving such warnings the right way?
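    As a point of comparison, a sketch of the corresponding whitelist entries (hedged: SCRIPTWHITELIST is a standard rkhunter.conf directive, but whether it is appropriate depends on each file having been verified first, as in steps a)-c) above):

        SCRIPTWHITELIST="/bin/which"
        SCRIPTWHITELIST="/usr/sbin/adduser"
        SCRIPTWHITELIST="/usr/bin/ldd"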

    Read the article

  • Effective way to check if an Entity/Player enters a region/trigger

    - by Chris
    I was wondering how multiplayer games detect if you enter a special region. Let's assume there is a huge map, so big that simply checking it would become a huge performance issue. I've seen Bukkit (a modding API for Minecraft servers) firing an event on every single move. I don't think that larger games do the same, because even if you have only a few coordinates you are interested in, you have to loop through a few trigger zones to see if the player is inside your region - for every player. This seems like an extremely CPU-intensive operation to me, even though I've never developed something like that. Is there a special algorithm that is used by larger games to accomplish this? The only thing I could imagine is to split up the world into multiple parts and to register the event not on the movement itself but on all the parts that are covered by your area, and only check for areas that are registered in the current part; a sketch of this idea follows below. And another thing I would like to know: how could you detect when someone must have entered a trigger but you never saw him directly in it, since his client only sent you a move packet shortly before entering and after leaving the trigger area? Drawing a line and calculating all colliding parts seems rather CPU-intensive if you have to perform it every time.
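    A minimal sketch of that partitioning idea (hedged: names and the cell size are illustrative). Triggers register with every grid cell their bounding box overlaps; a moving entity then only tests triggers in the cells its movement touches, and a segment-vs-box test catches fast movers that never produce a position sample inside the trigger:

        from collections import defaultdict, namedtuple

        CELL = 32.0  # world units per grid cell (tuning parameter)
        Trigger = namedtuple("Trigger", "xmin ymin xmax ymax name")

        def cells_for_aabb(xmin, ymin, xmax, ymax):
            for cx in range(int(xmin // CELL), int(xmax // CELL) + 1):
                for cy in range(int(ymin // CELL), int(ymax // CELL) + 1):
                    yield (cx, cy)

        grid = defaultdict(list)  # (cx, cy) -> triggers overlapping that cell

        def register(trigger):
            for cell in cells_for_aabb(trigger.xmin, trigger.ymin,
                                       trigger.xmax, trigger.ymax):
                grid[cell].append(trigger)

        def segment_hits_aabb(p0, p1, t):
            # slab test: does the movement segment p0 -> p1 cross the box?
            tmin, tmax = 0.0, 1.0
            for axis in (0, 1):
                d = p1[axis] - p0[axis]
                lo = (t.xmin, t.ymin)[axis]
                hi = (t.xmax, t.ymax)[axis]
                if abs(d) < 1e-9:
                    if not (lo <= p0[axis] <= hi):
                        return False
                else:
                    t0, t1 = (lo - p0[axis]) / d, (hi - p0[axis]) / d
                    if t0 > t1:
                        t0, t1 = t1, t0
                    tmin, tmax = max(tmin, t0), min(tmax, t1)
                    if tmin > tmax:
                        return False
            return True

        def triggers_crossed(p0, p1):
            # only look at cells the movement's bounding box touches
            box = (min(p0[0], p1[0]), min(p0[1], p1[1]),
                   max(p0[0], p1[0]), max(p0[1], p1[1]))
            candidates = set()
            for cell in cells_for_aabb(*box):
                candidates.update(grid[cell])
            return [t for t in candidates if segment_hits_aabb(p0, p1, t)]

    Calling triggers_crossed(old_pos, new_pos) on each move packet answers both questions at once: the cost scales with local trigger density rather than world size, and the segment test reports triggers the entity tunneled through between two packets.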

    Read the article

  • Can't update kernel to 2.6.35.27

    - by Uri Herrera
    When I try to update I get this message; I'm guessing I'm missing something here?

        Filesystem   Type      Size  Used Avail Use% Mounted on
        /dev/sdb6    ext4       43G  7.7G   33G  20% /
        none         devtmpfs  1.6G  349k  1.6G   1% /dev
        none         tmpfs     1.6G  5.9M  1.6G   1% /dev/shm
        none         tmpfs     1.6G  218k  1.6G   1% /var/run
        none         tmpfs     1.6G     0  1.6G   0% /var/lock
        /dev/sdb2    fuseblk   258G  198G   60G  77% /media/Backup
        /dev/sda1    fuseblk   321G  175G  146G  55% /media/Media
        /dev/sdb1    ext4       96M   84M  6.7M  93% /boot
        /dev/sdb7    ext4      175G   81G   86G  49% /home

    Here's the output:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be REMOVED:
          linux-image-2.6.35-22-generic
        0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
        5 not fully installed or removed.
        After this operation, 107MB disk space will be freed.
        Do you want to continue [Y/n]? y
        (Reading database ... 282211 files and directories currently installed.)
        Removing linux-image-2.6.35-22-generic ...
        Examining /etc/kernel/postrm.d .
        run-parts: executing /etc/kernel/postrm.d/initramfs-tools 2.6.35-22-generic /boot/vmlinuz-2.6.35-22-generic
        run-parts: executing /etc/kernel/postrm.d/zz-update-grub 2.6.35-22-generic /boot/vmlinuz-2.6.35-22-generic
        /etc/default/grub: 23: Syntax error: newline unexpected
        run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 2
        Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-2.6.35-22-generic.postrm line 328.
        dpkg: error processing linux-image-2.6.35-22-generic (--remove):
          subprocess installed post-removal script returned error exit status 1
        Errors were encountered while processing:
          linux-image-2.6.35-22-generic
        E: Sub-process /usr/bin/dpkg returned an error code (1)
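    A hedged reading of the transcript: the failure is not the kernel package itself but the "Syntax error: newline unexpected" reported for line 23 of /etc/default/grub, which makes update-grub (and therefore the kernel postrm hook) bail out. A minimal recovery sketch:

        # inspect and fix the broken line around line 23
        sudo nano /etc/default/grub
        # sanity-check that the file parses before retrying
        sh -n /etc/default/grub && sudo update-grub
        # let dpkg finish the interrupted removal
        sudo dpkg --configure -a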

    Read the article

  • Internet of Things (IoT) Thanksgiving Special: Turkey Tweeter (Part 1)

    - by hinkmond
    It's time for the Internet of Things (IoT) Thanksgiving Special. This time we are going to work on a special Do-It-Yourself project to create an Internet of Things temperature probe to connect your Turkey Day turkey to the Internet, by writing a Thanksgiving Day Java Embedded app for your Raspberry Pi which will send out tweets as it cooks in your oven. If you're vegetarian, don't worry: you can follow along and just run the simulation of the Turkey Tweeter, or better yet, try a tofu version of the Turkey Tweeter. Here is the parts list:

    • 1 Vernier Go!Temp USB Temperature Probe
    • 1 Uncooked Turkey
    • 1 Raspberry Pi (not Pumpkin Pie)
    • 1 Roll thermal reflective tape

    You can buy the Vernier Go!Temp USB Temperature Probe for $39 here: http://www.vernier.com/products/sensors/temperature-sensors/go-temp/. And you can get the thermal reflective tape from any auto parts store. (Don't tell them what you need it for. Say it's for rebuilding the V-8 engine in your Dodge Hemi. It avoids the need for a long explanation and sounds cooler...) The uncooked turkey can be found in your neighborhood grocery store. But if you're making a vegetarian Tofurkey, you're on your own... The Java Embedded app will be the same, though (Java is vegan). So, grab all your parts and come back here for the next part of this project...

    Hinkmond

    Read the article

  • Error while removing the new kernel 2.6.37

    - by Tarek
    Hi! I tried to install the new kernel, but something went wrong and I'm trying to remove it now. The error message is:

        mhd@Tarek-Laptop:~$ sudo apt-get install -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be REMOVED:
          linux-image-2.6.37-020637-generic
        0 upgraded, 0 newly installed, 1 to remove and 9 not upgraded.
        1 not fully installed or removed.
        After this operation, 111MB disk space will be freed.
        Do you want to continue [Y/n]? y
        (Reading database ... 188780 files and directories currently installed.)
        Removing linux-image-2.6.37-020637-generic ...
        Examining /etc/kernel/postrm.d .
        run-parts: executing /etc/kernel/postrm.d/initramfs-tools 2.6.37-020637-generic /boot/vmlinuz-2.6.37-020637-generic
        run-parts: executing /etc/kernel/postrm.d/zz-update-grub 2.6.37-020637-generic /boot/vmlinuz-2.6.37-020637-generic
        /etc/default/grub: 33: Syntax error: EOF in backquote substitution
        run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 2
        Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-2.6.37-020637-generic.postrm line 328.
        dpkg: error processing linux-image-2.6.37-020637-generic (--remove):
          subprocess installed post-removal script returned error exit status 1
        Errors were encountered while processing:
          linux-image-2.6.37-020637-generic
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    The previously unsolved error is in this bug.
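    As with the 2.6.35 entry above, a hedged observation: the dpkg failure is downstream of the shell syntax error in /etc/default/grub (here an unclosed backquote around line 33). Fixing that line and letting dpkg finish is the usual path out:

        sh -n /etc/default/grub    # reports the unclosed backquote near line 33
        sudo dpkg --configure -a   # resume the interrupted removal once it parses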

    Read the article

  • Cron prepending filename to script output

    - by Caitifty
    I'm having an issue with unwanted lines being added to files output by a cron job. I have a script in /etc/cron.hourly which selects some data from a mysql database and saves it in a text file in /var/www. When I run the script as root, it does exactly what I expect it to do. When the script is executed by cron, it creates the same file, but prepends the following three lines at the top of the output file:

        ::::::::::::::
        /var/www/outputfilename
        ::::::::::::::

    I can't for the life of me work out how to stop this unwanted behavior. The line in /etc/crontab for cron.hourly is the default:

        44 * * * * root cd / && run-parts --report /etc/cron.hourly

    If I use su to change to root and do "cd / && run-parts --report /etc/cron.hourly", the script runs as expected and the output doesn't have the mysterious additional 3 lines. I've also tried removing the --report flag from the run-parts command in case that was somehow connected, but no joy. Finally, perusing the cron log output in /var/log/syslog just says cron.hourly ran, without giving any additional information. Any suggestions on solving this weird problem are most welcome.
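    One hedged lead, offered as a guess rather than a diagnosis: that colon banner is the per-file header that more (and some other tools handed several files at once) prints between files, so it is worth checking whether anything in the script's pipeline calls more, or whether a PAGER-style environment variable reaches the mysql client only under cron's minimal environment. The header's shape is easy to reproduce (assuming util-linux more):

        $ echo a > /tmp/f1; echo b > /tmp/f2
        $ more /tmp/f1 /tmp/f2 | cat
        ::::::::::::::
        /tmp/f1
        ::::::::::::::
        a
        ::::::::::::::
        /tmp/f2
        ::::::::::::::
        b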

    Read the article

  • Does it make sense to write tests for legacy code when there is no time for a complete refactoring?

    - by is4
    I usually try to follow the advice of the book Working Effectively with Legacy Code. I break dependencies, move parts of the code to @VisibleForTesting public static methods and to new classes to make the code (or at least some part of it) testable. And I write tests to make sure that I don't break anything when I'm modifying or adding new functions. A colleague says that I shouldn't do this. His reasoning: the original code might not work properly in the first place, and writing tests for it makes future fixes and modifications harder, since devs have to understand and modify the tests too. If it's GUI code with some logic (~12 lines, 2-3 if/else blocks, for example), a test isn't worth the trouble, since the code is too trivial to begin with. Similar bad patterns could exist in other parts of the codebase too (which I haven't seen yet; I'm rather new); it will be easier to clean them all up in one big refactoring, and extracting out logic could undermine this future possibility. Should I avoid extracting out testable parts and writing tests if we don't have time for complete refactoring? Is there any disadvantage to this that I should consider?

    Read the article

  • Software architecture map to aid cross team communication?

    - by locster
    I work in a company where multiple teams each work on different parts of a software product in a vaguely agile/scrum manner. Mostly the organisation works well, but there have been instances where a team made a change without realising its impact on other teams. Where dependence is known, communication has been good, and where dependence is suspected, 'broadcast' emails and informal conversations have also worked well. But there exists a subset of tasks that fall between the cracks. Broadcast emails are likely not the solution, as they would become so numerous that the email signal/noise ratio would fall. I'm contemplating a solution that involves a sort of map of the software, which details all of the various parts of the system and loosely tries to place interacting and dependent parts near to each other. Each developer then updates their position on the map (today I'm working on X and Y), and therefore if two or more developers happen to be co-located (or proximate) on the map, we can see this each day, and this could form the trigger for further discussion on possible overlap and conflict. Is such a method out there and in use? If so, what is it and does it work? Otherwise, do you think such a scheme has merit?

    Read the article

  • How to Edit PDFs?

    - by snowguy
    I typically have two needs:

    Scenario A. Change a single PDF page. In this case I have a PDF but not the original source file used to create the PDF. I don't want to try to recreate the document from scratch; I'd like to open the PDF and change a few things. A good example of this scenario: I was responsible for planning a big event at a campground site, and I had a PDF of the site. I wanted to start with that document, highlight some parts, add some labels, and remove some parts that weren't relevant.

    Or, Scenario B. Combine PDFs or extract information from a PDF. This scenario usually arises because I want a single PDF deliverable that is made up of parts that are best created in different programs. In this case I have the source files for all the documents, but they don't play well enough together to easily create a single PDF deliverable. For part of it, I may want to use LibreOffice Writer. For another page I may want to use GIMP. Still another page I may use LibreOffice Calc. I could use Writer as the master document and embed images or the Calc object into that, but for ultimate control, you can't beat separate PDF documents that are then combined. What are the best tools / processes for editing PDFs in Ubuntu?
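    For Scenario B specifically, a hedged sketch of the usual command-line route (both tools are packaged in Ubuntu, though package names vary by release: pdfunite ships with poppler-utils, pdftk as its own package; the file names are placeholders):

        # export each part to PDF from Writer/Calc/GIMP first, then:
        pdfunite cover.pdf tables.pdf figures.pdf deliverable.pdf
        # or, with pdftk, which can also extract page ranges:
        pdftk deliverable.pdf cat 2-4 output extract.pdf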

    Read the article

  • Complex, yet simple crafting system model

    - by KatShot
    I'm working on an arcade shooter/slasher, and the main logline is "Kick'em with everything you want". There are not many enemies in the GDD; the main focus is on tons of weapons and gadgets to cause mayhem. To get a weapon, you need to craft it, and right now the crafting system looks simple, like: 1) You get three slots for weapon parts (like A, B, C). 2) You collect misc weapon parts, and when you have at least one for every slot, you can craft a weapon (for example, if you have A1, B1, B2, B3 and C1, you can craft these models - A1B1C1, A1B2C1, A1B3C1; see the sketch after this paragraph). As for me, this crafting system is too simple, because weapon parts will just fall from the top of the screen often enough. That's why I'm thinking about adding some more crafting system levels, like resources (collect 10 scrap pieces to make part A1 or C3), etc. My question is: how can I add some more complex, yet still simple and transparent, levels to the crafting system?

    upd. For example, in Minecraft or Terraria, the first 5-10 crafting recipes are quite transparent and simple IMHO. But then it turns into a huge mess to understand how to craft this or that (for example, a fishing rod).
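    To make the combination count concrete, a tiny sketch (hedged: the slot and part names are the question's examples) enumerating every craftable weapon from the collected parts:

        from itertools import product

        slots = {
            "A": ["A1"],
            "B": ["B1", "B2", "B3"],
            "C": ["C1"],
        }

        # every weapon is one part per slot: 1 * 3 * 1 = 3 models here
        for combo in product(*slots.values()):
            print("".join(combo))   # A1B1C1, A1B2C1, A1B3C1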

    Read the article

  • How to interpolate hue values in HSV colour space?

    - by nick
    Hi, I'm trying to interpolate between two colours in HSV colour space to produce a smooth colour gradient. I'm using a linear interpolation, e.g.:

        h = (1 - p) * h1 + p * h2
        s = (1 - p) * s1 + p * s2
        v = (1 - p) * v1 + p * v2

    (where p is the percentage, and h1, h2, s1, s2, v1, v2 are the hue, saturation and value components of the two colours). This produces a good result for s and v, but not for h. As the hue component is an angle, the calculation needs to work out the shortest distance between h1 and h2 and then do the interpolation in the right direction (either clockwise or anti-clockwise). What formula or algorithm should I use?

    EDIT: By following Jack's suggestions I modified my JavaScript gradient function and it works well. For anyone interested, here's what I ended up with:

        // create gradient from yellow to red to black with 100 steps
        var gradient = hsbGradient(100, [{h:0.14, s:0.5, b:1}, {h:0, s:1, b:1}, {h:0, s:1, b:0}]);

        function hsbGradient(steps, colours) {
          var parts = colours.length - 1;
          var gradient = new Array(steps);
          var gradientIndex = 0;
          var partSteps = Math.floor(steps / parts);
          var remainder = steps - (partSteps * parts);
          for (var col = 0; col < parts; col++) {
            // get colours
            var c1 = colours[col], c2 = colours[col + 1];
            // determine clockwise and counter-clockwise distance between hues
            var distCCW = (c1.h >= c2.h) ? c1.h - c2.h : 1 + c1.h - c2.h;
            var distCW = (c1.h >= c2.h) ? 1 + c2.h - c1.h : c2.h - c1.h;
            // ensure we get the right number of steps by adding the remainder to the final part
            if (col == parts - 1) partSteps += remainder;
            // make gradient for this part
            for (var step = 0; step < partSteps; step++) {
              var p = step / partSteps;
              // interpolate h in the shorter direction
              var h = (distCW <= distCCW) ? c1.h + (distCW * p) : c1.h - (distCCW * p);
              if (h < 0) h = 1 + h;
              if (h > 1) h = h - 1;
              // interpolate s, b
              var s = (1 - p) * c1.s + p * c2.s;
              var b = (1 - p) * c1.b + p * c2.b;
              // add to gradient array
              gradient[gradientIndex] = {h:h, s:s, b:b};
              gradientIndex++;
            }
          }
          return gradient;
        }
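    For the formula itself, a compact hedged sketch of the same shortest-arc idea (hue normalised to [0, 1), as in the code above; the function name is illustrative):

        // signed shortest hue distance, then a plain lerp, wrapped back into [0, 1)
        function lerpHue(h1, h2, p) {
          var d = ((h2 - h1 + 1.5) % 1) - 0.5;  // d is in [-0.5, 0.5)
          return (h1 + d * p + 1) % 1;
        }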

    Read the article

  • Can't iterate over nestled dict in django

    - by fredrik
    Hi, I'm trying to iterate over a nested dict list. The first level works fine, but the second level is treated like a string, not a dict. In my template I have this:

        {% for product in Products %}
        <li>
          <p>{{ product }}</p>
          {% for partType in product.parts %}
            <p>{{ partType }}</p>
            {% for part in partType %}
              <p>{{ part }}</p>
            {% endfor %}
          {% endfor %}
        </li>
        {% endfor %}

    It's the {{ part }} that just lists 1 char at a time, based on partType. And it seems that it's treated like a string. I can, however, reach all the dicts via dot notation, but not with a for loop. The current output looks like this:

        Color
        C
        o
        l
        o
        r
        Style
        S
        .....

    The Products object looks like this in the log:

        [{'product': <models.Products.Product object at 0x1076ac9d0>, 'parts': {u'Color': {'default': u'Red', 'optional': [u'Red', u'Blue']}, u'Style': {'default': u'Nice', 'optional': [u'Nice']}, u'Size': {'default': u'8', 'optional': [u'8', u'8.5']}}}]

    What I am trying to do is to pair together a dict/list for a product from a number of different SQL queries. The web handler looks like this:

        typeData = Products.ProductPartTypes.all()
        productData = Products.Product.all()
        langCode = 'en'
        productList = []
        for product in productData:
            typeDict = {}
            productDict = {}
            for type in typeData:
                typeDict[type.typeId] = { 'default' : '', 'optional' : [] }
            productDict['product'] = product
            productDict['parts'] = typeDict
            defaultPartsData = Products.ProductParts.gql('WHERE __key__ IN :key', key = product.defaultParts)
            optionalPartsData = Products.ProductParts.gql('WHERE __key__ IN :key', key = product.optionalParts)
            for defaultPart in defaultPartsData:
                label = Products.ProductPartLabels.gql('WHERE __key__ IN :key AND partLangCode = :langCode', key = defaultPart.partLabelList, langCode = langCode).get()
                productDict['parts'][defaultPart.type.typeId]['default'] = label.partLangLabel
            for optionalPart in optionalPartsData:
                label = Products.ProductPartLabels.gql('WHERE __key__ IN :key AND partLangCode = :langCode', key = optionalPart.partLabelList, langCode = langCode).get()
                productDict['parts'][optionalPart.type.typeId]['optional'].append(label.partLangLabel)
            productList.append(productDict)
        logging.info(productList)
        templateData = {
            'Languages' : Settings.Languges.all().order('langCode'),
            'ProductPartTypes' : typeData,
            'Products' : productList
        }

    I've tried making the dict in a number of different ways: first making a list, then a dict, using tuples, anything I could think of. Any help is welcome! Bonus: if someone has another approach to the SQL queries, that is more than welcome. I feel that it is kind of stupid to run that amount of queries. What is happening is that each product part has a different label based on langCode. ..fredrik
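    A hedged note on the likely fix (this is standard Django template behaviour, shown against the structures above): iterating a dict in a Django template yields its keys, and iterating a key (a string) yields characters, which matches the one-char-per-line output. Unpacking with .items gives the value dicts instead:

        {% for partType, info in product.parts.items %}
          <p>{{ partType }}</p>
          <p>{{ info.default }}</p>
          {% for option in info.optional %}
            <p>{{ option }}</p>
          {% endfor %}
        {% endfor %}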

    Read the article

  • An Honest look at SharePoint Web Services

    - by juanlarios
    INTRODUCTION

    If you are a SharePoint developer you know that there are two basic ways to develop against SharePoint: 1) the object model, and 2) web services. The SharePoint object model has the advantage of being quite rich. Anything you can do through the SharePoint UI as an administrator or end user, you can do through the object model. In fact, everything that is done through the UI is done through the object model behind the scenes. The major disadvantage to getting at SharePoint this way is that the code needs to run on the server. This means that all web parts, event receivers, features, etc… all of this is code that is deployed to the server. The second way to get to SharePoint is through the built-in web services. There are many articles on how to manipulate web services, how to authenticate to them and interact with them. The basic idea is that a remote application or process can contact SharePoint through a web service. Lots has been written about how great these web services are. This article is written to document the limitations and some of the issues and frustrations of working with SharePoint's built-in web services. Ultimately, for the tasks I was given, the built-in web services did not suffice. My evaluation of them was compared against creating my own WCF services to do what I needed. The current project I'm working on involves several "integration points": a remote application, installed on a separate server, was to contact SharePoint and perform a task or operation. So I decided to start up Visual Studio and build a DLL with basically 2 layers of logic: an integration layer and a data layer. A good friend of mine pointed me to SOLID principles and referred me to some videos and tutorials about it. I decided to implement the methodology (although a lot of the principles are common sense and I had already incorporated them in my coding practices). I was to deliver this DLL to the application team, and they would simply call the methods exposed by this DLL and voila! it would do some task or operation in SharePoint.

    SOLUTION

    My integration layer implemented an interface that defined some of the basic integration tasks that I was to put together. My data layer was about the same; it implemented an interface with some of the tasks that I was going to develop. This gave me the opportunity to develop different data layers, ultimately different ways to get at SharePoint if I needed to. This is a classic SOLID principle. In this case it proved to be quite helpful, because I wrote one data layer completely implementing SharePoint's built-in web services and another implementing my own WCF service that I wrote. I should mention there is another layer underneath the data layer. In referencing SharePoint or WCF services in my Visual Studio project, I created a class for every web service call. So for example, if I used Lists.asmx, I created a class called "DocumentRetrieval"; this class would do the grunt work of connecting to the correct URL, performing the basic operation of contacting the service and so on. If I used Views.asmx, I implemented a class called "ViewRetrieval" with the same idea as the last class, but it would now interact with all the operations in Views.asmx. This gave my data layer the ability to perform multiple calls without really worrying about the grunt work each class performs. This, again, is a classic SOLID principle. So, in order to compare them side by side, we can look at both data layers and what is involved in each.
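    To make the layering concrete, a hedged sketch (the interface and method names are illustrative, not the article's actual types): the integration layer talks only to an abstraction, so a web-service-backed and a WCF-backed implementation stay interchangeable:

        // one interface, two interchangeable data layers behind it
        public interface IProjectDataLayer
        {
            bool ProjectExists(string projectId);
            void CreateProject(string name, string templateName, string projectId);
        }
        // e.g. an AsmxDataLayer built on Lists.asmx, and a WcfDataLayer
        // built on the custom WCF service, both implementing this interface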
    Let's take a look at the "Create Project" task or operation. The integration point is described as: "dll is to provide a way to create a project in SharePoint". Projects, in this case, are basically document libraries. I am to implement a way in which a remote application can create a document library in SharePoint. Easy enough, right? Use the Lists.asmx web service in SharePoint. So here we go! Let's take a look at the code. I added the Lists.asmx web service reference to my project, and this is the class that contacts it:

        class DocumentRetrieval
        {
            private ListsSoapClient _service;
            private bool _impersonation;

            public DocumentRetrieval(bool impersonation, string endpt)
            {
                _service = new ListsSoapClient();
                this.SetEndPoint(string.Format("{0}/{1}", endpt, ConfigurationManager.AppSettings["List"]));
                _impersonation = impersonation;
                if (_impersonation)
                {
                    _service.ClientCredentials.Windows.ClientCredential.Password = ConfigurationManager.AppSettings["password"];
                    _service.ClientCredentials.Windows.ClientCredential.UserName = ConfigurationManager.AppSettings["username"];
                    _service.ClientCredentials.Windows.AllowedImpersonationLevel =
                        System.Security.Principal.TokenImpersonationLevel.Impersonation;
                }
            }

            private void SetEndPoint(string p)
            {
                _service.Endpoint.Address = new EndpointAddress(p);
            }

            /// <summary>
            /// Creates a document library with a specific name and template ID
            /// </summary>
            /// <param name="listName">New list name</param>
            /// <param name="templateID">Template ID</param>
            /// <returns></returns>
            public XmlElement CreateLibrary(string listName, int templateID, ref ExceptionContract exContract)
            {
                XmlDocument sample = new XmlDocument();
                XmlElement viewCol = sample.CreateElement("Empty");
                try
                {
                    _service.Open();
                    viewCol = _service.AddList(listName, "", templateID);
                }
                catch (Exception ex)
                {
                    exContract = new ExceptionContract("DocumentRetrieval/CreateLibrary", ex.GetType(), "Connection Error", ex.StackTrace, ExceptionContract.ExceptionCode.error);
                }
                finally
                {
                    _service.Close();
                }
                return viewCol;
            }
        }

    There was a lot more in this class (that I am not including), because I was reusing the grunt work and making other operations with Lists.asmx, for example, updating content types and changing or configuring lists or document libraries. One of the first things I noticed about working with the built-in services is that you are really at the mercy of what is available to you. Before creating a document library (project), I wanted to expose an IsProjectExisting method. This way the integration or data layer could recognize if a library already exists. Well, there is no service call or method available to do that check.
    So this is what I wrote:

        public bool DocLibExists(string listName, ref ExceptionContract exContract)
        {
            try
            {
                var allLists = _service.GetListCollection();
                return allLists.ChildNodes.OfType<XmlElement>().ToList().Exists(x => x.Attributes["Title"].Value == listName);
            }
            catch (Exception ex)
            {
                exContract = new ExceptionContract("DocumentRetrieval/GetList/GetListWSCall", ex.GetType(), "Unable to Retrieve List Collection", ex.StackTrace, ExceptionContract.ExceptionCode.error);
            }
            return false;
        }

    This really just gets an XmlElement with all the lists. It was then up to me to sift through the clutter and noise and see if the document library already existed. This took a little bit of getting used to. Now, instead of working with code, you are working with the XmlElement response format from the web service. I wrote a LINQ query to go through and find whether the attribute "Title" existed with a value of the list name: if so, it would return true, if not, false. I didn't particularly like working this way, dealing with XmlElement responses and then having to manipulate them to get at the exact data I was looking for. Once the check for DocLibExists was done, I would either create the document library or send back an error indicating the document library already existed. Now let's examine the code that actually creates the document library. It does what you are really after: it creates a document library. Notice how the template ID is really an integer. Every document library template in SharePoint has an ID associated with it. Document libraries, Image Library, Custom List, Project Tasks, etc… they all have a unique integer associated with them. Well, that's great, but the client came back to me and gave me some specifics that each "project" or document library should have. They specified they had 3 types of projects. Each project would have unique views, about 10 views for each project. Each project specified unique configurations (auditing, versioning, content types, etc…). So what started out as a simple implementation of creating a document library as a repository for a project turned out to be quite involved. The first thing I thought of was to create a template for the document library. There are other ways you can do this too. Using the web service call, you could configure views, versioning, even content types, etc…; the only catch is, you have to work quite extensively with CAML. I am not fond of CAML. I can do it and work with it, I just don't like doing it. It is quite touchy, and at times it is quite tough to understand where errors were made in CAML statements. Working with web services and CAML proved to be quite annoying. The service call would return a generic error message that did not particularly point me to a CAML statement syntax error, or even a CAML error; I was not sure if it was a security, performance or code-based issue. It was also difficult to work with because of the way SharePoint handles metadata: there are "Name", "Display Name", and "StaticName" fields, and it was quite tough to understand at times which one to use. So it took a lot of trial and error. There are tools that can help with CAML generation. There is also now IntelliSense for CAML statements in Visual Studio that might help, but ultimately I'm not fond of CAML with web services. So I decided on the template.
    So my plan was to create a document library, configure it accordingly and then use the Template Builder that comes with the SharePoint SDK. This tool allows you to create site templates, list templates, etc… It is quite interesting because it does not generate an STP file; it actually generates an XML definition and a feature you can activate to make that template available on a site or site collection. The first issue I experienced is that one of the specifications for this template was that the "All Documents" view was to have 2 web parts on it. Well, it turns out that the template builder did not include the web parts as part of the list template definition it generated. It backed up the settings, the views and the content types, but not the custom web parts. I still decided to try this even without the web parts on the page. This new template defined a new document library definition with a unique ID. The problem was that the service call accepts an int, but it only has access to the built-in library int definitions. Any new ones added or created will not be available to create. So this made it impossible for me to approach the problem this way. I should also mention that one of the nice features of SharePoint is the ability to create list templates, back them up and then create lists based on that template. It can all be done by end-user administrators. These templates are quite unique because they are saved as an STP file and not an XML definition. I also went this route and tried to see if there was another service call where I could create a document library based on a given template name. Nope! None.

    After some thinking I decided to implement a WCF service to do this creation for me. I was quite certain that the object model would allow me to create document libraries based on a template for which an ID was required, and also on templates saved as STP files. Now, I don't want to bother with posting the code to contact the WCF service because it's self-explanatory, but I will post the code that I used to create a list with a custom template:

        public ServiceResult CreateProject(string name, string templateName, string projectId)
        {
            string siteurl = SPContext.Current.Site.Url;
            Guid webguid = SPContext.Current.Web.ID;
            using (SPSite site = new SPSite(siteurl))
            {
                using (SPWeb rootweb = site.RootWeb)
                {
                    SPListTemplateCollection temps = site.GetCustomListTemplates(rootweb);
                    ProcessWeb(siteurl, webguid, web => Act_CreateProject(web, name, templateName, projectId, temps));
                } // SPWeb
            } // SPSite
            return _globalResult;
        }

        private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps)
        {
            var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));
            if (temp != null)
            {
                try
                {
                    Guid listGuid = targetsite.Lists.Add(name, "", temp);
                    SPList newList = targetsite.Lists[listGuid];
                    _globalResult = new ServiceResult(true, "Success", "Success");
                }
                catch (Exception ex)
                {
                    _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? "None" : ex.Message + " " + templateName), ex.StackTrace.ToString());
                }
            }
        }

        private void ProcessWeb(string siteurl, Guid webguid, Action<SPWeb> action)
        {
            using (SPSite sitecollection = new SPSite(siteurl))
            {
                using (SPWeb web = sitecollection.AllWebs[webguid])
                {
                    action(web);
                }
            }
        }

    This code is actually some of the code I implemented for the service; there was a lot more I did on project creation which I will cover in my next blog post. I implemented an Action method to process the web. This allowed me to properly dispose the SPWeb and SPSite objects and not rewrite this code over and over again. So I implemented a WCF service to create projects for me; this allowed me to do a lot more than just create a document library with a template. It now gave me the flexibility to do just about anything the client wanted at project creation. Once this was implemented, the client came back to me and said, "we reference all our projects with IDs in our application; we want SharePoint to do the same". This has been something I have been doing for a little while now, but I do hope that SharePoint 2010 can have more of an answer to this and address it properly. I have been adding metadata to SPWebs through the property bag; I believe I have blogged about it before. This time it required metadata added to a document library. No problem!!! I also mentioned the web parts that were to go on the "All Documents" view. I took the opportunity to configure them to the appropriate settings. There were two settings that needed to be set on these web parts. One of them was a Project ID configured in the web part properties.
The following code enhances and replaces the Act_CreateProject method above:

    private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps)
    {
        var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));
        if (temp != null)
        {
            SPLimitedWebPartManager wpmgr = null;
            try
            {
                Guid listGuid = targetsite.Lists.Add(name, "", temp);
                SPList newList = targetsite.Lists[listGuid];

                // SPList has no property bag, so stamp the project ID on the root folder instead.
                SPFolder rootFolder = newList.RootFolder;
                rootFolder.Properties.Add(KEY, projectId);
                rootFolder.Update();
                if (rootFolder.ParentWeb != targetsite)
                    rootFolder.ParentWeb.Dispose();

                if (!templateName.Contains("Natural"))
                {
                    // Locate the "All Documents" view page and configure the custom web part on it.
                    SPView alldocumentsview = newList.Views.Cast<SPView>().FirstOrDefault(x => x.Title.Equals(ALLDOCUMENTS));
                    SPFile alldocfile = targetsite.GetFile(alldocumentsview.ServerRelativeUrl);
                    wpmgr = alldocfile.GetLimitedWebPartManager(PersonalizationScope.Shared);
                    ConfigureWebPart(wpmgr, projectId, CUSTOMWPNAME);
                    alldocfile.Update();
                }

                if (newList.ParentWeb != targetsite)
                    newList.ParentWeb.Dispose();

                _globalResult = new ServiceResult(true, "Success", "Success");
            }
            catch (Exception ex)
            {
                _globalResult = new ServiceResult(false,
                    (string.IsNullOrEmpty(ex.Message) ? "None" : ex.Message + " " + templateName),
                    ex.StackTrace.ToString());
            }
            finally
            {
                if (wpmgr != null)
                {
                    // The web part manager opens its own SPWeb; dispose both to avoid leaks.
                    wpmgr.Web.Dispose();
                    wpmgr.Dispose();
                }
            }
        }
    }

    private void ConfigureWebPart(SPLimitedWebPartManager mgr, string prjId, string webpartname)
    {
        var wp = mgr.WebParts.Cast<System.Web.UI.WebControls.WebParts.WebPart>()
                    .FirstOrDefault(x => x.DisplayTitle.Equals(webpartname));
        if (wp != null)
        {
            // Set the project ID property on the custom web part and persist the change.
            (wp as ListRelationshipWebPart.ListRelationshipWebPart).ProjectID = prjId;
            mgr.SaveChanges(wp);
        }
    }

This shows how I was able to set metadata on the document library. Unfortunately SPList has no property bag to which I can add a key/value pair, so the value has to go on the library's root folder. Now everything in the integration references projects by ID and does not care about names. My DocLibExists will need to change as a result, because the out-of-the-box web services are not set up to look at property bags; I had to write another method on the WCF service to do the equivalent lookup with IDs instead of names (a sketch follows below).

The second thing you will notice in the code is the use of the SPLimitedWebPartManager. I have seen several examples online and read a lot about memory leaks; the code above does not produce any. The web part manager creates its own SPWeb, so just dispose of it as I did.
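For illustration, here is a minimal sketch of what that ID-based lookup can look like on the service side, where the object model is available. The method name ProjectExists is my own; the sketch assumes the same KEY constant and ProcessWeb helper used above.

    // Minimal sketch (hypothetical name): the ID-based replacement for DocLibExists,
    // implemented on the WCF service where the object model is available.
    public bool ProjectExists(string projectId)
    {
        string siteurl = SPContext.Current.Site.Url;
        Guid webguid = SPContext.Current.Web.ID;
        bool exists = false;

        ProcessWeb(siteurl, webguid, web =>
        {
            // Check each document library's root-folder property bag for the project ID.
            exists = web.Lists.Cast<SPList>()
                        .Where(l => l.BaseType == SPBaseType.DocumentLibrary)
                        .Any(l => projectId.Equals(l.RootFolder.Properties[KEY] as string));
        });

        return exists;
    }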
CONCLUSION

This is a long post, so I will stop here for now; I will continue with more comparisons and limitations in my next post. My conclusion for this example: the out-of-the-box web services will do the trick if you can suffer through CAML and you are only doing simple operations. For everything else, there's WCF!

I apologize for any disorganization in this post; I wrote it on a bus during a twelve-hour trip to Iowa, half asleep and half awake. Hopefully it makes enough sense to someone.
