Search Results

Search found 7338 results on 294 pages for 'broken packages'.

Page 102/294

  • Why are static imports of static methods with same names legal?

    - by user1055638
    Let's say we have these packages and classes: package p1; public class A1 { public static void a() {} } package p2; public class A1 { public static void a() {} } package p3; import static p1.A1.a; import static p2.A1.a; public class A1 { public static void test() { } } I am wondering why the static import of the methods is legal (it won't result in a compile-time error) in package p3. We won't be able to use them further in the test() method, as such usage will result in a compile-time error. Why is it not the same as with a normal import of classes? Let's say we would like to import the classes A1 from packages p1 and p2 into p3: package p3; import p1.A1; import p2.A1; Such an import is illegal and will result in a compile-time error.

    Read the article

  • Python in AWS Elastic Beanstalk: Private package dependencies

    - by Adam Matan
    I would like to deploy a Python Flask application on Beanstalk. The application depends on external packages (e.g. geopy) and internal packages (e.g. adam_geography). The manual says: "Create a requirements.txt file and place it in the top-level directory of your source bundle." This would probably fetch geopy and its dependencies, but would not fetch adam_geography, which is available from a custom repo inside my VPC. How do I specify/upload private, internal Python package dependencies in a Beanstalk application?
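
    One approach worth sketching (the index URL below is a hypothetical placeholder, not from the original question): pip requirements files accept an --extra-index-url line, so the requirements.txt in the source bundle can point pip at an internal package index that is reachable from inside the VPC:

        # requirements.txt -- the internal index URL is a placeholder
        --extra-index-url https://pypi.internal.example.com/simple/
        geopy
        adam_geography

    This only works if the Beanstalk instances can resolve and reach that internal host; if the private code lives in a plain git repository rather than a package index, a VCS requirement line (git+https://...) is the usual alternative.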

    Read the article

  • PL/SQL exception and Java programs

    - by edwards
    Hi. Business logic is coded in PL/SQL packages, procedures and functions. Java programs call the PL/SQL packages, procedures and functions to do database work. The PL/SQL programs store exceptions in Oracle tables whenever an exception is raised. How would my Java programs get the exceptions, since the exception, instead of being propagated from PL/SQL to Java, is persisted to an Oracle table and the procs/functions just return 1 or 0? Sorry folks, I should have added this constraint much earlier and avoided this confusion. As with many legacy projects, we don't have the freedom to modify the stored procedures.

    Read the article

  • What is __path__ useful for?

    - by Jason Baker
    I had never noticed the __path__ attribute that gets defined on some of my packages before today. According to the documentation: Packages support one more special attribute, __path__. This is initialized to be a list containing the name of the directory holding the package’s __init__.py before the code in that file is executed. This variable can be modified; doing so affects future searches for modules and subpackages contained in the package. While this feature is not often needed, it can be used to extend the set of modules found in a package. Could somebody explain to me what exactly this means and why I would ever want to use it?
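
    As a minimal, hypothetical sketch of what modifying __path__ looks like in practice (the package and directory names here are made up): appending an extra directory inside a package's __init__.py makes modules in that directory importable as submodules of the package.

        # mypkg/__init__.py  (hypothetical package)
        import os

        # After this, a file mypkg/plugins/extra.py becomes importable as
        # mypkg.extra, because "plugins" is now one of the directories
        # searched when resolving submodules of mypkg.
        __path__.append(os.path.join(os.path.dirname(__file__), "plugins"))

    The standard library's pkgutil.extend_path relies on the same mechanism to let one package be split across several directories on sys.path, which is the most common reason to touch __path__ at all.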

    Read the article

  • MVC route with id and sub-action

    - by Dan Revell
    I can't figure out what I need to do with MVC routing to make this work. Here's my one route: routes.MapHttpRoute( name: "DefaultApi", routeTemplate: "{controller}/{id}/{action}", defaults: new { id = RouteParameter.Optional, action = RouteParameter.Optional } ); The request /Shipments/ works great. The request /Shipments/3/Packages works great. The request /Shipments/3, however, fails with the error: Multiple actions were found that match the request: System.Linq.IQueryable`1[Api.Controllers.RequisitionsController+PackageRequisitionWithTracking] GetPackageRequisitions(Int32) on type Api.Controllers.RequisitionsController Api.Models.ShipmentRequisition GetShipmentRequisitions(Int32) on type Api.Controllers.RequisitionsController It can't seem to differentiate between: public ShipmentRequisition GetShipmentRequisitions(int id) and [ActionName("Packages")] public IQueryable<PackageRequisitionWithTracking> GetPackageRequisitions(int id) I would have thought the lack of an action name on the get-shipment-by-id method would allow that route to work.

    Read the article

  • How to install mod_dav_svn on Centos 5.5

    - by Min2liz
    I'm trying to install mod_dav_svn on my server (CentOS 5.5, Apache 2.2.11, DirectAdmin 1.35.1), but no luck. I'm using this tutorial: http://wiki.centos.org/HowTos/Subversion#head-ae2d6fa671ad7ebd5d7835c6edbcd15dd2d73c4d So, I'm trying in the console: # yum install mod_dav_svn subversion Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * addons: centos.bst.lt * base: centos.bst.lt * extras: centos.bst.lt * updates: centos.bst.lt Excluding Packages in global exclude list Finished Setting up Install Process No package mod_dav_svn available. Package subversion-1.4.2-4.el5_3.1.i386 already installed and latest version Nothing to do Why can't it find mod_dav_svn? I also tried: # yum search mod_dav_svn Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * addons: centos.bst.lt * base: centos.bst.lt * extras: centos.bst.lt * updates: centos.bst.lt Excluding Packages in global exclude list Finished Warning: No matches found for: mod_dav_svn No Matches found Nothing...

    Read the article

  • dependson option does not work in knitr?

    - by umair durrani
    Data file: the file is named toto.Rmd and contains the following: ##1st ```{r clock, cache=TRUE} x <- 600 x ``` ##2nd ```{r, cache=TRUE, cache.path="toto_cache/", dependson="clock"} x+5 ``` Problem The second chunk is not being updated. Previously x was 500; after updating it to 600, I get the following after Knit HTML in RStudio: 1st x <- 600 x ## [1] 600 2nd x+5 ## [1] 505 What am I missing here? Session Info > sessionInfo() R version 3.0.3 (2014-03-06) Platform: x86_64-w64-mingw32/x64 (64-bit) locale: [1] LC_COLLATE=English_United States.1252 LC_CTYPE=English_United States.1252 [3] LC_MONETARY=English_United States.1252 LC_NUMERIC=C [5] LC_TIME=English_United States.1252 attached base packages: [1] stats graphics grDevices utils datasets methods base other attached packages: [1] ggplot2_0.9.3.1 loaded via a namespace (and not attached): [1] colorspace_1.2-3 dichromat_2.0-0 digest_0.6.4 evaluate_0.5.5 [5] formatR_0.10 grid_3.0.3 gtable_0.1.2 htmltools_0.2.4 [9] knitr_1.6 labeling_0.2 MASS_7.3-29 munsell_0.4.2 [13] plyr_1.8.1 proto_0.3-10 RColorBrewer_1.0-5 Rcpp_0.11.0 [17] reshape2_1.2.2 rmarkdown_0.2.64 scales_0.2.3 stringr_0.6.2 [21] tools_3.0.3 yaml_2.1.10 >

    Read the article

  • % confuses python raw sql query

    - by Jonathan
    Following this SO question, I'm trying to "truncate" all tables related to a certain Django application using the following raw SQL commands in Python: cursor.execute("set foreign_key_checks = 0") cursor.execute("select concat('truncate table ',table_schema,'.',table_name,';') as sql_stmt from information_schema.tables where table_schema = 'my_db' and table_type = 'base table' AND table_name LIKE 'some_prefix%'") for sql in [sql[0] for sql in cursor.fetchall()]: cursor.execute(sql) cursor.execute("set foreign_key_checks = 1") Alas I receive the following error: C:\dev\my_project>my_script.py Traceback (most recent call last): File "C:\dev\my_project\my_script.py", line 295, in <module> cursor.execute(r"select concat('truncate table ',table_schema,'.',table_name,';') as sql_stmt from information_schema.tables where table_schema = 'my_db' and table_type = 'base table' AND table_name LIKE 'some_prefix%'") File "C:\Python26\lib\site-packages\django\db\backends\util.py", line 18, in execute sql = self.db.ops.last_executed_query(self.cursor, sql, params) File "C:\Python26\lib\site-packages\django\db\backends\__init__.py", line 216, in last_executed_query return smart_unicode(sql) % u_params TypeError: not enough arguments for format string Is the % in the LIKE making trouble? How can I work around it?
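
    The % is indeed the trouble: as the traceback shows, the query string is run through Python %-style formatting (smart_unicode(sql) % u_params) even though no parameters were supplied, so the bare % in the LIKE pattern is read as a format specifier. A sketch of the two usual workarounds, applied to the query from the question: double the percent sign, or pass the pattern as a bound parameter.

        # Option 1: escape the literal percent so %-formatting leaves it alone
        cursor.execute(
            "select concat('truncate table ', table_schema, '.', table_name, ';') "
            "from information_schema.tables "
            "where table_schema = 'my_db' and table_name like 'some_prefix%%'")

        # Option 2: let the driver substitute the values as parameters
        cursor.execute(
            "select concat('truncate table ', table_schema, '.', table_name, ';') "
            "from information_schema.tables "
            "where table_schema = %s and table_name like %s",
            ["my_db", "some_prefix%"])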

    Read the article

  • Nuget and Source Control Files to Exclude?

    - by Peter Kellner
    I know I should be using NuGet more, but at this point I don't completely understand the nuances, so I still tend to either get the source and build the project, then reference the project, or I create my own "dlls" folder and hand-copy the DLLs in. As part of my learning process, I'm trying to understand what is critical and what is not when using NuGet. For example, I've done install-package restsharp and now when I check into source control, I get files like "packages/RestSharp.103.4/lib/net4/RestSharp.xml". I'm assuming that NuGet will help me with upgrading and such, and that it needs to have certain metadata-type files. My question is: should I be ignoring any or all files in the "packages" directory? If so, what and why? Thanks

    Read the article

  • How to build custom sqlite with provider under Android?

    - by dr4cul4
    Hi, I have a really strange problem. I need to build a custom sqlite3 database engine under Android OS, but I also want to use the database provider implementation. Unfortunately, when examining the sources of Android 1.6 I noticed that it's not so easy. Many classes in the android.database.* packages use the original provider, and many other parts of the framework use the android.database.sqlite.* packages directly, which of course makes this abstraction a bit confusing and unnecessary. But getting to my question: is there any way that I could extend the database interfaces to use a custom implementation of sqlite (or any other database)?

    Read the article

  • Save the output of a command in a string in linux using python

    - by user1657901
    I am using Fedora 17 Xfce and I am programming in Python 2.7.3. Fedora uses a package manager called yum. I have a Python script that searches for packages like this: import os package = raw_input("Enter package name to search: ") os.system("yum list " + package) So I want Python to check whether the output of this command contains the words "No matching packages to list". I checked a similar question (http://stackoverflow.com/questions/2502833/python-store-output-of-subprocess-call-in-a-string) and tried some of the methods there, but the string contained only the first line of the output. Thanks in advance.
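
    A minimal sketch of one way to do this (assuming Python 2.7 as in the question, and that yum prints the quoted message on stdout or stderr): os.system only returns the exit status, so capture the output with subprocess instead.

        import subprocess

        package = raw_input("Enter package name to search: ")

        # Capture stdout and stderr together; the "no match" message may go to
        # either stream depending on the yum version.
        proc = subprocess.Popen(["yum", "list", package],
                                stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT)
        output = proc.communicate()[0]

        if "No matching packages to list" in output:
            print "Package not found:", package
        else:
            print output

    subprocess.check_output would also work on 2.7, but yum may exit non-zero when nothing matches, which makes check_output raise CalledProcessError, so Popen is the simpler fit here.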

    Read the article

  • list all functions on CRAN

    - by baptiste
    Suppose I'm trying to run a script of unknown origin, and one of the functions is from a package that is not loaded by the script (an oversight; maybe it was loaded in the .Rprofile of the person who wrote it). How can I find in which package this function resides? There's some information compiled on CRAN that doesn't require the user to download/install all R packages locally; however, as far as I could tell, it only gives access to the DESCRIPTION files. RSiteSearch, and its web equivalent, seem to access an online database of all CRAN packages, where presumably a list of all functions would be available. Is there some way of accessing this information? Thanks. Edit: I know sos::findFn, utils::RSiteSearch and search.r-project; what I would like is to get the raw data that these tools use.

    Read the article

  • PyQt4.QtCore doesn't contain many of its classes and attributes

    - by toc777
    Hi everyone, I have built PyQt4 from source and everything went smoothly until I tried to use some of the classes and attributes located in QtCore. For some reason QtCore is missing a lot of functionality and data that should be there. For example, from PyQt4.QtCore import QT_VERSION_STR raises an ImportError. There were no errors or warnings given when building the packages, and I have also tried with the PyQt packages from yum, but I have the same problem. Has anyone else encountered this problem before? Thanks.
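
    A hedged diagnostic sketch (nothing here is from the original post): missing QtCore attributes after a source build are often a sign that Python is importing a different, older PyQt4 or sip than the one just built, so printing where the module actually comes from and which versions it reports is a quick first check.

        import sip
        from PyQt4 import QtCore

        print QtCore.__file__           # which QtCore module is actually loaded
        print sip.SIP_VERSION_STR       # sip version the interpreter picked up
        print QtCore.PYQT_VERSION_STR   # these fail on the broken build described above
        print QtCore.QT_VERSION_STR

    If the printed path points at the yum-installed copy rather than the freshly built one, the new build simply is not the one on sys.path.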

    Read the article

  • Unable to setup postgres on ubuntu (8.4 on 9.10)

    - by shabda
    I am trying to setup postgres on ubuntu, and I cant proceed as I cant find the location of pg_hba.conf (Postgres 8.4 on ubuntu 9.10) What I did setup postgres via aptitude This gave me ... /usr/share/postgresql/8.4# aptitude install postgresql Reading package lists... Done Building dependency tree Reading state information... Done Reading extended state information Initializing package states... Done The following NEW packages will be installed: libreadline5{a} postgresql postgresql-8.4{a} postgresql-client-8.4{a} postgresql-client-common{a} postgresql-common{a} 0 packages upgraded, 6 newly installed, 0 to remove and 0 not upgraded. Need to get 0B/5,159kB of archives. After unpacking 18.8MB will be used. Do you want to continue? [Y/n/?] Y Writing extended state information... Done Preconfiguring packages ... Selecting previously deselected package libreadline5. (Reading database ... 17490 files and directories currently installed.) Unpacking libreadline5 (from .../libreadline5_5.2-6_amd64.deb) ... Selecting previously deselected package postgresql-client-common. Unpacking postgresql-client-common (from .../postgresql-client-common_101_all.deb) ... Selecting previously deselected package postgresql-client-8.4. Unpacking postgresql-client-8.4 (from .../postgresql-client-8.4_8.4.1-1_amd64.deb) ... Selecting previously deselected package postgresql-common. Unpacking postgresql-common (from .../postgresql-common_101_all.deb) ... Selecting previously deselected package postgresql-8.4. Unpacking postgresql-8.4 (from .../postgresql-8.4_8.4.1-1_amd64.deb) ... Selecting previously deselected package postgresql. Unpacking postgresql (from .../postgresql_8.4.1-1_all.deb) ... Processing triggers for man-db ... Setting up libreadline5 (5.2-6) ... Setting up postgresql-client-common (101) ... Setting up postgresql-client-8.4 (8.4.1-1) ... update-alternatives: using /usr/share/postgresql/8.4/man/man1/psql.1.gz to provide /usr/share/man/man1/psql.1.gz (psql.1.gz) in auto mode. Setting up postgresql-common (101) ... Setting up postgresql-8.4 (8.4.1-1) ... update-alternatives: using /usr/share/postgresql/8.4/man/man1/postmaster.1.gz to provide /usr/share/man/man1/postmaster.1.gz (postmaster.1.gz) in auto mode. Setting up postgresql (8.4.1-1) ... Processing triggers for libc-bin ... ldconfig deferred processing now taking place Reading package lists... Done Building dependency tree Reading state information... Done Reading extended state information Initializing package states... Done Writing extended state information... Done tried to login as su - postgres Followed by createdb mytestdb This failed with could not connect to database postgres: could not connect to server: No such file or directory Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"? So I though I need to enable local connections in pg_hba.conf, which I cant find. So I created a new file as sudo vim /etc/postgresql/8.4/main/pg_hba.conf With values local all all ident But I still cant login after restarting the service. What are next steps for me to take.

    Read the article

  • Can someone help me install MySQL server please? This is bugging me...

    - by Alex
    $ sudo aptitude install mysql-server Reading package lists... Done Building dependency tree Reading state information... Done Reading extended state information Initializing package states... Done The following NEW packages will be installed: libhtml-template-perl{a} mysql-server mysql-server-5.0{a} mysql-server-core-5.0{a} 0 packages upgraded, 4 newly installed, 0 to remove and 12 not upgraded. Need to get 0B/27.7MB of archives. After unpacking 91.1MB will be used. Do you want to continue? [Y/n/?] y Writing extended state information... Done Preconfiguring packages ... Selecting previously deselected package mysql-server-core-5.0. (Reading database ... 17022 files and directories currently installed.) Unpacking mysql-server-core-5.0 (from .../mysql-server-core-5.0_5.1.30really5.0.75-0ubuntu10.3_amd64.deb) ... Selecting previously deselected package mysql-server-5.0. Unpacking mysql-server-5.0 (from .../mysql-server-5.0_5.1.30really5.0.75-0ubuntu10.3_amd64.deb) ... Selecting previously deselected package libhtml-template-perl. Unpacking libhtml-template-perl (from .../libhtml-template-perl_2.9-1_all.deb) ... Selecting previously deselected package mysql-server. Unpacking mysql-server (from .../mysql-server_5.1.30really5.0.75-0ubuntu10.3_all.deb) ... Setting up mysql-server-core-5.0 (5.1.30really5.0.75-0ubuntu10.3) ... Setting up mysql-server-5.0 (5.1.30really5.0.75-0ubuntu10.3) ... * Stopping MySQL database server mysqld [ OK ] /var/lib/dpkg/info/mysql-server-5.0.postinst: line 144: /etc/mysql/conf.d/old_passwords.cnf: No such file or directory dpkg: error processing mysql-server-5.0 (--configure): subprocess post-installation script returned error exit status 1 Setting up libhtml-template-perl (2.9-1) ... dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.0; however: Package mysql-server-5.0 is not configured yet. dpkg: error processing mysql-server (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: mysql-server-5.0 mysql-server E: Sub-process /usr/bin/dpkg returned an error code (1) A package failed to install. Trying to recover: Setting up mysql-server-5.0 (5.1.30really5.0.75-0ubuntu10.3) ... * Stopping MySQL database server mysqld [ OK ] /var/lib/dpkg/info/mysql-server-5.0.postinst: line 144: /etc/mysql/conf.d/old_passwords.cnf: No such file or directory dpkg: error processing mysql-server-5.0 (--configure): subprocess post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.0; however: Package mysql-server-5.0 is not configured yet. dpkg: error processing mysql-server (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: mysql-server-5.0 mysql-server Reading package lists... Done Building dependency tree Reading state information... Done Reading extended state information Initializing package states... Done Writing extended state information... Done Before I installed it, I ran this: sudo aptitude purge mysql-server mysql-server-5.0 This has happened before. I remember before, I did something with dpkg with fixed it.

    Read the article

  • How to install php-devel under CentOS 6.3 x64?

    - by Jeremy Dicaire
    I'm trying to install php-devel on my CentOS 6.3 VPS and get a failed dependencies test. From phpinfos(): SYSTEM Linux 2.6.32-279.5.2.el6.x86_64 #1 x86_64 NTS error: Failed dependencies: php(x86-64) = 5.4.6-1.el6.remi is needed by php-devel-5.4.6-1.el6.remi.x86_64 I've tried the following RPM packages: php54w-devel-5.4.6-1.w6.x86_64.rpm php-devel-5.4.6-1.el6.remi.i686.rpm php-devel-5.4.6-1.el6.remi.x86_64.rpm One of the above package gave me this: root@sv1 [/tmp]# rpm -Uvh php-devel-5.4.6-1.el6.remi.i686.rpm warning: php-devel-5.4.6-1.el6.remi.i686.rpm: Header V3 DSA/SHA1 Signature, key ID 00f97f56: NOKEY error: Failed dependencies: php(x86-32) = 5.4.6-1.el6.remi is needed by php-devel-5.4.6-1.el6.remi.i686 libbz2.so.1 is needed by php-devel-5.4.6-1.el6.remi.i686 libcom_err.so.2 is needed by php-devel-5.4.6-1.el6.remi.i686 libcrypto.so.10 is needed by php-devel-5.4.6-1.el6.remi.i686 libedit.so.0 is needed by php-devel-5.4.6-1.el6.remi.i686 libgmp.so.3 is needed by php-devel-5.4.6-1.el6.remi.i686 libgssapi_krb5.so.2 is needed by php-devel-5.4.6-1.el6.remi.i686 libk5crypto.so.3 is needed by php-devel-5.4.6-1.el6.remi.i686 libkrb5.so.3 is needed by php-devel-5.4.6-1.el6.remi.i686 libncurses.so.5 is needed by php-devel-5.4.6-1.el6.remi.i686 libssl.so.10 is needed by php-devel-5.4.6-1.el6.remi.i686 libstdc++.so.6 is needed by php-devel-5.4.6-1.el6.remi.i686 libxml2.so.2 is needed by php-devel-5.4.6-1.el6.remi.i686 libxml2.so.2(LIBXML2_2.4.30) is needed by php-devel-5.4.6-1.el6.remi.i686 libxml2.so.2(LIBXML2_2.5.2) is needed by php-devel-5.4.6-1.el6.remi.i686 libxml2.so.2(LIBXML2_2.6.0) is needed by php-devel-5.4.6-1.el6.remi.i686 libxml2.so.2(LIBXML2_2.6.11) is needed by php-devel-5.4.6-1.el6.remi.i686 libxml2.so.2(LIBXML2_2.6.5) is needed by php-devel-5.4.6-1.el6.remi.i686 libz.so.1 is needed by php-devel-5.4.6-1.el6.remi.i686 I don't know how to fix this error and download all the dependencies. Thank you. Edit 1 (for quanta): Here is "yum repolist": root@sv1 [/tmp]# yum repolist Loaded plugins: fastestmirror, presto Loading mirror speeds from cached hostfile * base: mirror.atlanticmetro.net * epel: mirror.cogentco.com * extras: mirror.atlanticmetro.net * rpmforge: mirror.us.leaseweb.net * updates: centos.mirror.choopa.net repo id repo name status base CentOS-6 - Base 5,980+366 epel Extra Packages for Enterprise Linux 6 - x86_64 6,493+1,272 extras CentOS-6 - Extras 4 rpmforge RHEL 6 - RPMforge.net - dag 2,123+2,310 updates CentOS-6 - Updates 499+29 repolist: 15,099 root@sv1 [/tmp]# rpm -qa | grep php didn't return any result. 
I forgot to mention I'm using cPanel/WHM Edit 2 after adding the Remi repo: >root@sv1 [/etc/yum.repos.d]# yum clean all Loaded plugins: fastestmirror, presto Cleaning repos: base epel extras remi remi-test rpmforge updates Cleaning up Everything Cleaning up list of fastest mirrors 1 delta-package files removed, by presto >root@sv1 [/etc/yum.repos.d]# yum repolist Loaded plugins: fastestmirror, presto Determining fastest mirrors epel/metalink | 12 kB 00:00 * base: centos.mirror.nac.net * epel: mirror.symnds.com * extras: centos.mirror.choopa.net * remi: remi-mirror.dedipower.com * remi-test: remi-mirror.dedipower.com * rpmforge: mirror.us.leaseweb.net * updates: centos.mirror.nac.net base | 3.7 kB 00:00 base/primary_db | 4.5 MB 00:00 epel | 4.3 kB 00:00 epel/primary_db | 4.7 MB 00:00 extras | 3.0 kB 00:00 extras/primary_db | 6.3 kB 00:00 remi | 2.9 kB 00:00 remi/primary_db | 330 kB 00:00 remi-test | 2.9 kB 00:00 remi-test/primary_db | 85 kB 00:00 rpmforge | 1.9 kB 00:00 rpmforge/primary_db | 2.5 MB 00:00 updates | 3.5 kB 00:00 updates/primary_db | 2.3 MB 00:00 repo id repo name status base CentOS-6 - Base 5,980+366 epel Extra Packages for Enterprise Linux 6 - x86_64 6,493+1,272 extras CentOS-6 - Extras 4 remi Les RPM de remi pour Enterprise Linux 6 - x86_64 96+564 remi-test Les RPM de remi en test pour Enterprise Linux 6 - x86_64 25+139 rpmforge RHEL 6 - RPMforge.net - dag 2,123+2,310 updates CentOS-6 - Updates 499+29 repolist: 15,220 >root@sv1 [/etc/yum.repos.d]# yum install php-devel Loaded plugins: fastestmirror, presto Loading mirror speeds from cached hostfile * base: centos.mirror.nac.net * epel: mirror.symnds.com * extras: centos.mirror.choopa.net * remi: remi-mirror.dedipower.com * remi-test: remi-mirror.dedipower.com * rpmforge: mirror.us.leaseweb.net * updates: centos.mirror.nac.net Setting up Install Process No package php-devel available. Error: Nothing to do >root@sv1 [/etc/yum.repos.d]#

    Read the article

  • unattended-upgrades does not reboot

    - by Cheiron
    I am running Debian 7 stable with unattended-upgrades (every morning at 6 AM) to make sure I am always fully updated. I have the following config: $ cat /etc/apt/apt.conf.d/50unattended-upgrades // Automatically upgrade packages from these origin patterns Unattended-Upgrade::Origins-Pattern { // Archive or Suite based matching: // Note that this will silently match a different release after // migration to the specified archive (e.g. testing becomes the // new stable). "o=Debian,a=stable"; "o=Debian,a=stable-updates"; // "o=Debian,a=proposed-updates"; "origin=Debian,archive=stable,label=Debian-Security"; }; // List of packages to not update Unattended-Upgrade::Package-Blacklist { // "vim"; // "libc6"; // "libc6-dev"; // "libc6-i686"; }; // This option allows you to control if on a unclean dpkg exit // unattended-upgrades will automatically run // dpkg --force-confold --configure -a // The default is true, to ensure updates keep getting installed //Unattended-Upgrade::AutoFixInterruptedDpkg "false"; // Split the upgrade into the smallest possible chunks so that // they can be interrupted with SIGUSR1. This makes the upgrade // a bit slower but it has the benefit that shutdown while a upgrade // is running is possible (with a small delay) //Unattended-Upgrade::MinimalSteps "true"; // Install all unattended-upgrades when the machine is shuting down // instead of doing it in the background while the machine is running // This will (obviously) make shutdown slower //Unattended-Upgrade::InstallOnShutdown "true"; // Send email to this address for problems or packages upgrades // If empty or unset then no email is sent, make sure that you // have a working mail setup on your system. A package that provides // 'mailx' must be installed. E.g. "[email protected]" Unattended-Upgrade::Mail "root"; // Set this value to "true" to get emails only on errors. Default // is to always send a mail if Unattended-Upgrade::Mail is set Unattended-Upgrade::MailOnlyOnError "true"; // Do automatic removal of new unused dependencies after the upgrade // (equivalent to apt-get autoremove) //Unattended-Upgrade::Remove-Unused-Dependencies "false"; // Automatically reboot *WITHOUT CONFIRMATION* if a // the file /var/run/reboot-required is found after the upgrade Unattended-Upgrade::Automatic-Reboot "true"; // Use apt bandwidth limit feature, this example limits the download // speed to 70kb/sec //Acquire::http::Dl-Limit "70"; As you can see Automatic-Reboot is true and thus the server should automaticly reboot. Last time I checked the server was online for over 100 days, which means that the update from Debian 7.1 to Debian 7.2 has happened while the server was up (and indeed, all updates were installed), but this involves kernel updates, which means that the server should reboot. It did not. The server was running very slow, so I rebooted which fixed that. I did some research and found out that unattended-upgrades responds to the reboot-required file in /var/run/. I touched this file and waited one week, the file still exists and the server did not reboot. So I think that unattended-uppgrades ignores the auto-reboot part. So, am I doing somthing wrong here? Why did the server not restart? The upgrade part works perfect by the way, its just the reboot part that does not seem to work as it should.

    Read the article

  • Reinstall Postfix

    - by Kevin
    I tried to reinstall Postfix, but I get this bunch of errors: root@***:/etc/init.d# sudo apt-get install -f postfix Reading package lists... Done Building dependency tree Reading state information... Done Suggested packages: procmail postfix-mysql postfix-pgsql postfix-ldap postfix-pcre resolvconf postfix-cdb mail-reader The following NEW packages will be installed: postfix 0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded. Need to get 0B/1,389kB of archives. After this operation, 3,531kB of additional disk space will be used. Preconfiguring packages ... Selecting previously deselected package postfix. (Reading database ... 56122 files and directories currently installed.) Unpacking postfix (from .../postfix_2.7.1-1ubuntu0.1_amd64.deb) ... Processing triggers for ureadahead ... Processing triggers for ufw ... Processing triggers for man-db ... Setting up postfix (2.7.1-1ubuntu0.1) ... Configuration file `/etc/init.d/postfix' ==> File on system created by you or by a script. ==> File also in package provided by package maintainer. What would you like to do about it ? Your options are: Y or I : install the package maintainer's version N or O : keep your currently-installed version D : show the differences between the versions Z : start a shell to examine the situation The default action is to keep your current version. *** postfix (Y/I/N/O/D/Z) [default=N] ? Y Installing new version of config file /etc/init.d/postfix ... Adding group `postfix' (GID 109) ... Done. Adding system user `postfix' (UID 106) ... Adding new user `postfix' (UID 106) with group `postfix' ... Not creating home directory `/var/spool/postfix'. Creating /etc/postfix/dynamicmaps.cf Adding tcp map entry to /etc/postfix/dynamicmaps.cf Adding group `postdrop' (GID 115) ... Done. setting myhostname: ***.net setting alias maps setting alias database setting myorigin setting destinations: ***.net, localhost.***.net, , localhost setting relayhost: setting mynetworks: 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 setting mailbox_size_limit: 0 setting recipient_delimiter: + setting inet_interfaces: all Postfix is now set up with a default configuration. If you need to make changes, edit /etc/postfix/main.cf (and others) as needed. To view Postfix configuration values, see postconf(1). After modifying main.cf, be sure to run '/etc/init.d/postfix reload'. Running newaliases postalias: fatal: /etc/mailname: cannot open file: Permission denied dpkg: error processing postfix (--configure): subprocess installed post-installation script returned error exit status 1 Processing triggers for libc-bin ... ldconfig deferred processing now taking place Errors were encountered while processing: postfix E: Sub-process /usr/bin/dpkg returned an error code (1) I tried aptitude purge, remove, autoclean and all of dpkg options (configure, remove, purge) but nothing did the trick. /etc/mailname exists (0644 root:root) with as content *.net (fetched from hostname). What am I doing wrong?

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial to enable many companies to adhere to many of the Compliance regulations to which they are bound (e.g. CFR 21 Part 11 or Sarbanes–Oxley.)   From something as simple as relating Tasks to Check-in’s or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release. All that information is available and more in TFS. Although all of this tradability is available within TFS you do need to understand that it is not for free. Well… I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting. Using Visual Studio ALM and Team Foundation Server you can relate every line of code changes all the way up to requirements and back down through Test Cases to the Test Results. Figure: The only thing missing is Build In order to build the relationship model below we need to examine how each of the relationships get there. Each member of your team from programmer to tester and Business Analyst to Business have their roll to play to knit this together. Figure: The relationships required to make this work can get a little confusing If Build is added to this to relate Work Items to Builds and with knowledge of which builds are in which environments you can easily identify what is contained within a Release. Figure: How are things progressing Along with the ability to produce the progress and trend reports the tractability that is built into TFS can be used to fulfil most audit requirements out of the box, and augmented to fulfil the rest. In order to understand the relationships, lets look at each of the important Artifacts and how they are associated with each other… Requirements – The root of all knowledge Requirements are the thing that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRD’s) but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so lets look at the model: Figure: If the centre of the world was a requirement We can track which releases Requirements were scheduled in, but this can change over time as more details come to light. Figure: Who edited the Requirement and when There is also the ability to query Work Items based on the History of changed that were made to it. This is particularly important with Requirements. It might not be enough to say what Requirements were completed in a given but also to know which Requirements were ever assigned to a particular release. Figure: Some magic required, but result still achieved As an augmentation to this it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any Query in the system and add a “Asof” clause at the end to query historical data in the operational store for TFS. select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>] Figure: Work Item Query Language (WIQL) format In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in notepad, but one imported into TFS you run it any time you want. 
Figure: Saving Queries locally can be useful All of these Audit features are available throughout the Work Item Tracking (WIT) system within TFS. Tasks – Where the real work gets done Tasks are the work horse of the development team, but they only as useful as Excel if you do not relate them properly to other Artifacts. Figure: The Task Work Item Type has its own relationships Requirements should be broken down into Tasks that the development team work from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software team but however it happens all of the Tasks create should be a Child of a Requirement Work Item Type. Figure: Tasks are related to the Requirement Tasks should be used to track the day-to-day activities of the team working to complete the software and as such they should be kept simple and short lest developers think they are more trouble than they are worth. Figure: Task Work Item Type has a narrower purpose Although the Task Work Item Type describes the work that will be done the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset which is committed to TFS in a single operation. During this operation developers can associate Work Item with the Changeset. Figure: Tasks are associated with Changesets   Changesets – Who wrote this crap Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task. Figure: Changesets are linked by Tasks and Builds   Figure: Changesets tell us what happened to the files in Version Control Although comments can be changed after the fact, the inventory and Work Item associations are permanent which allows us to Audit all the way down to the individual change level. Figure: On Check-in you can resolve a Task which automatically associates it Because of this we can view the history on any file within the system and see how many changes have been made and what Changesets they belong to. Figure: Changes are tracked at the File level What would be even more powerful would be if we could view these changes super imposed over the top of the lines of code. Some people call this a blame tool because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of Auditing changes to the system. Figure: Annotate shows the lines the Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version of your version control. The problem with Label’s is that they can be changed after they have been created with no tractability. This makes them practically useless for any sort of compliance audit. So what do you use? Branches – And why we need them Branches are a really powerful tool for development and release management, but they are most important for audits. Figure: One way to Audit releases The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build was created. It can be created as soon as the Build has been signed of for release. However it is still possible that someone changed the Label between this time and its creation. Another better method can be to explicitly link the Build output to the Build. 
Builds – Lets tie some more of this together Builds are the glue that helps us enable the next level of tractability by tying everything together. Figure: The dashed pieces are not out of the box but can be enabled When the Build is called and starts it looks at what it has been asked to build and determines what code it is going to get and build. Figure: The folder identifies what changes are included in the build The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build the Build Agent identifies the new Changesets it is building by looking at the Check-ins that have occurred since the last Build. Figure: What changes have been made since the last successful Build It will then use that information to identify the Work Items that are associated with all of the Changesets Changesets are associated with Build and change the “Integrated In” field of those Work Items . Figure: Find all of the Work Items to associate with The “Integrated In” field of all of the Work Items identified by the Build Agent as being integrated into the completed Build are updated to reflect the Build number that successfully integrated that change. Figure: Now we know which Work Items were completed in a build Now that we can link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing and back down to the Build that contains the finished Requirement. But how do we know wither that Requirement has been fully tested or even meets the original Requirements? Test Cases – How we know we are done The only way we can know wither a Requirement has been completed to the required specification is to Test that Requirement. In TFS there is a Work Item type called a Test Case Test Cases enable two scenarios. The first scenario is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business a set of goals that must be met for a Requirement to be accepted by them it makes it both difficult for them to reject a Requirement when it passes all of the tests, but also provides a level of tractability and validation for audit that a feature has been built and tested to order. Figure: You can have many Acceptance Criteria for a single Requirement It is crucial for this to work that someone from the Business has to sign-off on the Test Case moving from the  “Design” to “Ready” states. The Second is the ability to associate an MS Test test with the Test Case thereby tracking the automated test. This is useful in the circumstance when you want to Track a test and the test results of a Unit Test designed to test the existence of and then re-existence of a a Bug. Figure: Associating a Test Case with an automated Test Although it is possible it may not make sense to track the execution of every Unit Test in your system, there are many Integration and Regression tests that may be automated that it would make sense to track in this way. Bug – Lets not have regressions In order to know wither a Bug in the application has been fixed and to make sure that it does not reoccur it needs to be tracked. Figure: Bugs are the centre of their own world If the fix to a Bug is big enough to require that it is broken down into Tasks then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build. 
You would also have one or more Test Cases to prove the fix for the Bug. Figure: Bugs have many associations This allows you to track Bugs / Defects in your system effectively and report on them. Change Request – I am not a feature In the CMMI Process template Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately as an Auditor may want to know what was changed and who authorised it. Again and similar to Bugs, if the Change Request is big enough that it would require to be broken down into Tasks it is in reality a new feature and should be tracked as a Requirement. Figure: Make sure your Change Requests only Affect Requirements and not rewrite them Conclusion Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of Compliance requirements while still enabling them to be Agile. Most Audits are heavy on required documentation but most of that information is captured for you as long a you do it right. You don’t even need every team member to understand it all as each of the Artifacts are relevant to a different type of team member. Business Analysts manage Requirements and Change Requests Programmers manage Tasks and check-in against Change Requests and Bugs Testers manage Bugs and Test Cases Build Masters manage Builds Although there is some crossover there are still rolls or “hats” that are worn. Do you thing this is all achievable? Have I missed anything that you think should be there?

    Read the article

  • What am I doing wrong with this use of StructLayout( LayoutKind.Explicit ) when calling a PInvoke st

    - by csharptest.net
    The following is a complete program. It works fine as long as you don't uncomment the '#define BROKEN' at the top. The break is due to a PInvoke failing to marshal a union correctly. The INPUT_RECORD structure in question has a number of substructures that might be used depending on the value in EventType. What I don't understand is that when I define only the single child structure of KEY_EVENT_RECORD it works with the explicit declaration at offset 4. But when I add the other structures at the same offset the structure's content get's totally hosed. //UNCOMMENT THIS LINE TO BREAK IT: //#define BROKEN using System; using System.Runtime.InteropServices; class ConIOBroken { static void Main() { int nRead = 0; IntPtr handle = GetStdHandle(-10 /*STD_INPUT_HANDLE*/); Console.Write("Press the letter: 'a': "); INPUT_RECORD record = new INPUT_RECORD(); do { ReadConsoleInputW(handle, ref record, 1, ref nRead); } while (record.EventType != 0x0001/*KEY_EVENT*/); Assert.AreEqual((short)0x0001, record.EventType); Assert.AreEqual(true, record.KeyEvent.bKeyDown); Assert.AreEqual(0x00000000, record.KeyEvent.dwControlKeyState & ~0x00000020);//strip num-lock and test Assert.AreEqual('a', record.KeyEvent.UnicodeChar); Assert.AreEqual((short)0x0001, record.KeyEvent.wRepeatCount); Assert.AreEqual((short)0x0041, record.KeyEvent.wVirtualKeyCode); Assert.AreEqual((short)0x001e, record.KeyEvent.wVirtualScanCode); } static class Assert { public static void AreEqual(object x, object y) { if (!x.Equals(y)) throw new ApplicationException(); } } [DllImport("Kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)] public static extern IntPtr GetStdHandle(int nStdHandle); [DllImport("Kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)] public static extern bool ReadConsoleInputW(IntPtr hConsoleInput, ref INPUT_RECORD lpBuffer, int nLength, ref int lpNumberOfEventsRead); [StructLayout(LayoutKind.Explicit)] public struct INPUT_RECORD { [FieldOffset(0)] public short EventType; //union { [FieldOffset(4)] public KEY_EVENT_RECORD KeyEvent; #if BROKEN [FieldOffset(4)] public MOUSE_EVENT_RECORD MouseEvent; [FieldOffset(4)] public WINDOW_BUFFER_SIZE_RECORD WindowBufferSizeEvent; [FieldOffset(4)] public MENU_EVENT_RECORD MenuEvent; [FieldOffset(4)] public FOCUS_EVENT_RECORD FocusEvent; //} #endif } [StructLayout(LayoutKind.Sequential)] public struct KEY_EVENT_RECORD { public bool bKeyDown; public short wRepeatCount; public short wVirtualKeyCode; public short wVirtualScanCode; public char UnicodeChar; public int dwControlKeyState; } [StructLayout(LayoutKind.Sequential)] public struct MOUSE_EVENT_RECORD { public COORD dwMousePosition; public int dwButtonState; public int dwControlKeyState; public int dwEventFlags; }; [StructLayout(LayoutKind.Sequential)] public struct WINDOW_BUFFER_SIZE_RECORD { public COORD dwSize; } [StructLayout(LayoutKind.Sequential)] public struct MENU_EVENT_RECORD { public int dwCommandId; } [StructLayout(LayoutKind.Sequential)] public struct FOCUS_EVENT_RECORD { public bool bSetFocus; } [StructLayout(LayoutKind.Sequential)] public struct COORD { public short X; public short Y; } } UPDATE: For those worried about the struct declarations themselves: bool is treated as a 32-bit value the reason for offset(4) on the data is to allow for the 32-bit structure alignment which prevents the union from beginning at offset 2. 
    Again, my problem isn't making PInvoke work at all; it's trying to figure out why these additional structures (supposedly at the same offset) are fouling up the data simply by being added.

    Read the article

  • SSIS - Access Denied with UNC paths - The file name is a device or contains invalid characters

    - by simonsabin
    I spent another day tearing my hair out yesterday trying to resolve an issue with SSIS packages running in SQL Agent (not got much left at the moment, maybe I should contact the SSIS team for a wig). My situation was that I am deploying packages to a development server, and to provide isolation I was running jobs with a proxy account that only had access to the development servers. Proxies are an awesome feature and mean that you should never have to "just run the job as sysadmin". The issue I was facing was that the job step was failing. The job step was a simple execution of the package. The following errors appeared in my log file. I always check the "Log step output in history" option for a job step; this ensures you get all the output from the command that you run. I'll blog about this later. If looking at the output in sysdtslog90, you will have an entry with datacode -1073573533 and error message File or directory "<filename>" represented by connection "<connection>" does not exist. Not exactly helpful. If you get the output from the console then you will also get these errors: 0xC0202070 "The file name property is not valid. The file name is a device or contains invalid characters." 0xC001401E "specified in the connection was not valid." It appears this error is due to the use of a UNC path and the account running the package not having access to all the folders in the path. Solution: To solve this you need to ensure that the proxy account has access to ALL folders in the path you are accessing. To check this works, log on as the relevant proxy user, or run a command window as the specified user. Then try net use \\server\share and then do a dir for each folder in the path and check you have access. If these work and you still have the problem then you have some other problem, sorry. The following are posts on Experts Exchange that also discuss this: http://www.experts-exchange.com/Microsoft/Development/MS-SQL-Server/SSIS/Q_24056047.html http://www.experts-exchange.com/Microsoft/Development/MS-SQL-Server/SSIS/Q_23968903.html This blog post suggested it was a 64-bit issue, which definitely wasn't the case for me as I was on a 32-bit server: http://blogs.perkinsconsulting.com/post/64-bit-SQL-Server-2005-SSIS-and-UNC-paths-Part-2.aspx

    Read the article

  • SQL Server Editions and Integration Services

    The SQL Server 2005 and SQL Server 2008 product family has quite a few editions now, so what does this mean for SQL Server Integration Services? Starting from the bottom we have the free edition known as Express, and the entry level Workgroup edition, as well as the new Web edition. None of these three include the full SSIS product, but they do all include the SQL Server Import and Export Wizard, with access to basic data sources but nothing more, so for simple loading and extraction of data this should suffice. You will not be able to build packages though, this is just a one shot deal aimed at using the wizard on an ad-hoc basis. To get the full power of Integration Services you need to start with Standard edition. This includes the BI Development Studio, for building your own packages, and fully functional IDE integrated into Visual Studio. (You get the full VS 2005/2008 IDE with the product). All core functions will be available but with a restricted set of transformations and tasks. The SQL Server 2005 Features Comparison or Features Supported by the Editions of SQL Server 2008 describes standard edition as having basic transforms, compared to Enterprise which includes the advanced transforms. I think basic is a little harsh considering the power you get with Standard, but the advanced covers the truly ground-breaking capabilities of data mining, text mining and cleansing or fuzzy transforms. The power of performing these operations within your ETL pipeline should not be underestimated, but not all processes will require these capabilities, so it seems like a reasonable delineation. Thankfully there are no feature limitations or artificial governors within Standard compared to Enterprise. The same control flow and data flow engines underpin both editions, with the same configuration and deployment options allowing you to work seamlessly between environments and editions if using the common components. In fact there are no govenors at all in SSIS, so whilst the SQL Database engine is limited to 4 CPUs in Standard edition, SSIS is only limited by the base operating system. The advanced transforms only available with Enterprise edition: Data Mining Training Destination Data Mining Query Component Fuzzy Grouping Fuzzy Lookup Term Extraction Term Lookup Dimension Processing Destination Partition Processing Destination The advanced tasks only available with Enterprise edition: Data Mining Query Task So in summary, if you want SQL Server Integration Services, you need SQL Server Standard edition, and for the more advanced tasks and transforms you need SQL Server Enterprise edition. To recap, the answer to the often asked question is no, SQL Server Integration Services is not available in SQL Server Express or Workgroup editions.

    Read the article
