Search Results

Search found 20426 results on 818 pages for 'service packs'.


  • How to create systemd.service in Fedora 16 (x86_64)?

    - by marverix
    I have a big problem creating a service the new way, via systemctl (systemd.service), in Fedora 16. I want to create a very simple service for the minidlna server. I have created a new file called minidlna.service in /lib/systemd/system/ and here is what it looks like:

        [Unit]
        Description=Mini DLNA

        [Service]
        Type=oneshot
        ExecStart=/usr/sbin/minidlna

        [Install]
        WantedBy=multi-user.target

    Unfortunately, systemctl status minidlna.service prints:

        Loaded: loaded (/lib/systemd/system/minidlna.service; enabled)
        Active: inactive (dead) since Sat, 03 Dec 2011 20:49:23 +0100; 9s ago
        Main PID: 1580 (code=exited, status=0/SUCCESS)
        CGroup: name=systemd:/system/minidlna.service

    Any ideas how to fix it? Cheers!
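
    The status output is the clue: minidlna daemonizes by default, so with Type=oneshot systemd watches the initial process exit successfully and marks the unit dead. A minimal sketch of a unit that tracks the forked daemon instead (the PID file path is an assumption; check what minidlna actually writes on this system):

        [Unit]
        Description=Mini DLNA
        After=network.target

        [Service]
        Type=forking
        PIDFile=/var/run/minidlna.pid
        # -P tells minidlna where to write its PID file
        ExecStart=/usr/sbin/minidlna -P /var/run/minidlna.pid

        [Install]
        WantedBy=multi-user.target

    Alternatively, keep the daemon in the foreground (minidlna -d) and use Type=simple.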

    Read the article

  • tftpd-hpa service must be restarted before working after fresh boot

    - by Steve
    I'm running Ubuntu 12.04 inside a VirtualBox VM. I've installed tftpd-hpa so I can boot an embedded Linux device via TFTP. My problem is that after a fresh boot of the VM, tftpd doesn't seem to work until I restart the service, after which it works great until the system is rebooted. The transcript below should explain the situation.

    EDIT: After the fresh boot, I execute netstat -a | grep tftp and find nothing. After restarting the service, the same command returns udp 0 0 *:tftp *:* (whitespace removed). I think this might be the key to the problem, I'm just not sure how to resolve it. I don't think it's related to this specific issue, but I had another problem with tftpd that was asked and answered in this question.

        steve@steve-VirtualBox:~$ cat /etc/default/tftpd-hpa
        # /etc/default/tftpd-hpa
        TFTP_USERNAME="tftp"
        TFTP_DIRECTORY="/var/lib/tftpboot"
        TFTP_ADDRESS="0.0.0.0:69"
        TFTP_OPTIONS="--secure"
        steve@steve-VirtualBox:~$ ls -l /var/lib/tftpboot
        total 8204
        -rw-r--r-- 1 root root   34352 May 28 08:22 am335x-boneblack.dtb
        -rw-r--r-- 1 root root   33206 May 28 08:22 am335x-bone.dtb
        -rw-r--r-- 1 root root   41564 May 28 08:22 am335x-evm.dtb
        -rw-r--r-- 1 root root   38048 May 28 08:22 am335x-evmsk.dtb
        -rwxr-xr-x 1 root root 4117904 May 20 09:39 zImage
        -rw-r--r-- 1 root root 4117616 May 28 08:22 zImage-am335x-evm.bin
        steve@steve-VirtualBox:~$ tftp localhost
        tftp> get zImage
        Transfer timed out.
        tftp> quit
        steve@steve-VirtualBox:~$ sudo service tftpd-hpa restart
        [sudo] password for steve:
        tftpd-hpa stop/waiting
        tftpd-hpa start/running, process 2106
        steve@steve-VirtualBox:~$ tftp localhost
        tftp> get zImage
        Received 4143798 bytes in 1.4 seconds
        tftp> quit
        steve@steve-VirtualBox:~$
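
    The empty netstat output suggests the daemon starts before its network dependencies are ready and fails to bind without complaint. On 12.04 tftpd-hpa is managed by Upstart, so a commonly reported workaround (a sketch, assuming the stock job file location) is to tighten the start condition in /etc/init/tftpd-hpa.conf:

        # /etc/init/tftpd-hpa.conf: wait until filesystems are mounted and a
        # real network interface is up before starting the daemon
        start on (filesystem and net-device-up IFACE!=lo)

    As a blunt fallback, a "service tftpd-hpa restart" line in /etc/rc.local achieves the same effect after every boot.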

    Read the article

  • Error when starting qpidd as a service

    - by Sparks
    I have recently swapped from CentOS 5 to Fedora 17. Previously I have created my own init.d scripts successfully (albeit not for qpidd); however, in Fedora I cannot get it to work. I have created the following script (called qpidd) in the init.d directory:

        #!/bin/bash
        #
        # /etc/rc.d/init.d/qpidd
        #
        # QPID/AMQP Broker scripts
        #
        # chkconfig: 2345 20 80
        # description: QPID/AMQP Broker service
        # processname: qpidd
        # pidfile: /var/lock/subsys/qpidd

        # Source function library.
        . /etc/init.d/functions

        SERVICENAME=qpidd

        start() {
            echo -n "Starting $SERVICENAME: "
            daemon qpidd -d &
            retval=$?
            touch /var/lock/subsys/$SERVICENAME
            return $retval
        }

        stop() {
            echo -n "Shutting down $SERVICENAME: "
            qpidd -q &
            retval=$?
            rm -f /var/lock/subsys/$SERVICENAME
            return $retval
        }

        case "$1" in
            start) start ;;
            stop) stop ;;
            status) status qpidd ;;
            restart) stop; start ;;
            condrestart) [ -f /var/lock/subsys/<service> ] && restart || : ;;
            *) echo "Usage: $SERVICENAME {start|stop|status|restart}"; exit 1 ;;
        esac
        exit $?

    After this, I ran chkconfig --add qpidd; however, now when I run sudo service qpidd start I get the following message:

        Starting qpidd (via systemctl): Job failed. See system journal and 'systemctl status' for details.

    If I then run systemctl status qpidd I get the following message:

        Failed to issue method call: Unit name qpidd is not valid.

    I am now lost. I have searched the web and Stack Overflow but cannot find anybody with a similar problem. Any help or direction to a website that can help would be much appreciated. Sparks :)
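
    Fedora 17 is systemd-based, and SysV scripts are only shimmed through systemctl, so it is often simpler to write a native unit instead. A minimal sketch (the broker's binary path is an assumption; adjust to where qpidd is installed):

        # /etc/systemd/system/qpidd.service
        [Unit]
        Description=QPID/AMQP Broker
        After=network.target

        [Service]
        # qpidd stays in the foreground unless -d is given, which suits Type=simple
        Type=simple
        ExecStart=/usr/sbin/qpidd
        ExecStop=/usr/sbin/qpidd -q

        [Install]
        WantedBy=multi-user.target

    Then reload and start it with: sudo systemctl daemon-reload && sudo systemctl start qpidd.service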

    Read the article

  • Install IIS on Server 2003 unattended via PowerShell as a service user (no terminal session)

    - by maik
    I've been racking my brain with this for a bit and figured I would ask here to see if anyone could enlighten me. As the title says, I'm trying to install the IIS role on Server 2003 using an unattended install method launched via a service. We're using RightScale and most of what we want to accomplish is pretty straightforward. I created an unattend file for use with sysocmgr.exe:

        [Components]
        iis_common = ON
        iis_www = ON
        iis_www_vdir_scripts = ON
        iis_inetmgr = ON
        fp_extensions = ON
        iis_ftp = ON

    And I invoke it like so:

        sysocmgr.exe /i:%windir%\inf\sysoc.inf /u:C:\path\to\iis-unattend.txt /r /x /q

    If I run that from a command prompt while logged in as Administrator it works just fine, but if it runs via RightScript (the RightScale user on the server, which is a local admin) it fails somewhere in the middle, and the logs I get are rather unhelpful. The thing is, I can do the same thing with the SNMP client (which is a Windows component, not a server role) and it works with no problems when run via the script service user. My best guess is that sysocmgr.exe expects a GUI element to be there during the role installation, and since the service user has no terminal session it coughs and dies. That's just a wild stab in the dark.
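
    One workaround worth trying (a sketch, not verified against RightScale) is to have the script register and fire a one-shot scheduled task, so the installer runs under SYSTEM through the Task Scheduler service rather than inside the script service's own session:

        schtasks /create /tn "IISInstall" /ru SYSTEM /sc once /st 23:59 /tr "sysocmgr.exe /i:%windir%\inf\sysoc.inf /u:C:\path\to\iis-unattend.txt /r /x /q"
        schtasks /run /tn "IISInstall"
        schtasks /delete /tn "IISInstall" /f

    Whether sysocmgr.exe really needs an interactive session is, as suspected above, the open question; if it does, this at least isolates the failure to the session rather than the account.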

    Read the article

  • Does NetworkSolutions have a good DNS service?

    - by joxl
    I'm recovering from a DNS disaster and I need some good advice on an alternative solution. My company owns a domain name through NetworkSolutions. Our website is hosted by another company, which also maintains our DNS records. Our email is hosted by Google Apps, and the MX records are maintained through the aforementioned website/DNS host. Yesterday our website/DNS host had a serious hiccup in some software and completely overwrote all of our DNS records with invalid values, successfully pointing our domain and MX records at the wrong servers. Unfortunately it wasn't caught until it had time to significantly propagate. On top of that, it wasn't fixed until several hours later; combine that with a long TTL on the records and we have customers who are still bouncing emails. Anyhow, I am now completely terrified of this company's ability to do a good job, so I am considering switching to NetworkSolutions for our DNS service. I need the ability to configure A, CNAME, MX, and TXT records, preferably with a nice user interface (our current provider has a poor UI and doesn't support TXT records). Is NetworkSolutions a recommended DNS host? I am a little biased in their direction because the service will be free, since we already pay them for our domain name. However, I'm curious what others have experienced with their service.
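
    Whichever host ends up holding the zone, it is worth spot-checking records and TTLs from the outside during any migration. With dig (substituting the real domain for example.com), the second field of each answer line is the remaining TTL, the value that kept the stale records alive here:

        dig +noall +answer example.com MX
        dig +noall +answer www.example.com

    Lowering TTLs to a few minutes a day or two before switching providers keeps any future mistake from lingering for hours.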

    Read the article

  • IIS8 Asp.net State service remote connection failure

    - by maxisam
    Recently we upgraded our web server to Windows Server 2012 with IIS8. We have an issue when users try to connect to the ASP.NET State Service on this web server remotely; it always pops up:

        Unable to make the session state request to the session state server. Please ensure that the ASP.NET State service is started and that the client and server ports are the same. If the server is on a remote machine, please ensure that it accepts remote requests by checking the value of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters\AllowRemoteConnection. If the server is on the local machine, and if the before mentioned registry value does not exist or is set to 0, then the state server connection string must use either 'localhost' or '127.0.0.1' as the server name.

    In IIS7/7.5 we used the same setup and it worked fine. As long as the state service is running and the firewall is set properly, we don't have any problem. However, in IIS8 it doesn't work (we even turned off the firewall to test it). Thanks for helping.
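
    For anyone ruling out the basics, a sketch of the usual server-side checklist (42424 is the state service's default port; adjust if it has been changed):

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters" /v AllowRemoteConnection /t REG_DWORD /d 1 /f
        net stop aspnet_state
        net start aspnet_state
        netsh advfirewall firewall add rule name="ASP.NET State Service" dir=in action=allow protocol=TCP localport=42424

    The asker has evidently done the equivalent already (it worked on IIS 7.x), so if these check out on 2012 the next suspects are the port in the connection string and whether the service ended up bound only to loopback.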

    Read the article

  • Adding custom service to nagiosgraph

    - by ravloony
    I have successfully added nagiosgraph to our Nagios installation. I also added a memory checker plugin, from here: http://blog.vergiss-blackjack.de/2010/04/nagios-plugin-to-check-memory-consumption/. However, I can't seem to get the graph of this service to be output by nagiosgraph. The plugin returns a single line like this:

        31% (3785 of 11903 MB) used

    So I added a rule like this to the map file:

        /output:(\d+)% \((\d+) of (\d+) MB\) used/
        and push @s, ['Mem',
                      ['Percentage', 'GUAGE', $1],
                      ['Used', 'GUAGE', $2],
                      ['Total', 'GUAGE', $3] ];

    I have also read this: http://www.mail-archive.com/[email protected]/msg36835.html and made sure that process_performance_data=1 in the Nagios conf file. So far I have no graph for the Mem service on any host, and no RRD file either. I am unsure how to proceed. The documentation is rather difficult to follow and I haven't yet understood it well enough to do this. Can anyone point me to a tutorial, or some documentation, which explains the steps needed to get a service noticed and graphed by nagiosgraph?
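
    One likely culprit, assuming standard nagiosgraph map semantics: RRDtool only accepts the data source type spelled GAUGE, so the 'GUAGE' entries would make RRD file creation fail, which matches the missing RRD files. A corrected rule would read:

        /output:(\d+)% \((\d+) of (\d+) MB\) used/
        and push @s, ['Mem',
                      ['Percentage', 'GAUGE', $1],
                      ['Used',       'GAUGE', $2],
                      ['Total',      'GAUGE', $3] ];

    If the graph still does not appear, raising nagiosgraph's own log level shows whether the rule is matching the service output at all.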

    Read the article

  • Create account for service

    - by Andy
    I am configuring a new server. The server runs Hudson, which is going to copy some files from this server to another. The other server is a virtual machine. Both run Windows Server 2012. Hudson is started on server A with the "Local System" log on. When I come to the copy phase it says "Access denied". Changing the log on to "Administrator" works; however, I guess this is bad. I do not have much experience with user management. I tried to create a dedicated hudson account on both servers A and B, and tried to make the service log on as that account in service management, but it doesn't start. How would you create an account for this particular service that has access to the shared folder on server B and can be used to start the service on server A? I guess I need two accounts with the same username and password on server A and server B? The folder on server B is shared with Everyone and the guest account is enabled.
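
    Without a domain, matching local accounts are indeed the usual pattern. A sketch of the steps (the account name, password, and share name here are placeholders):

        rem On both server A and server B: same username, same password
        net user hudson "S0me-Str0ng-Pass!" /add
        rem On server B: grant the account rights on the share and the folder
        net share Builds=C:\Builds /grant:hudson,CHANGE
        icacls C:\Builds /grant hudson:(OI)(CI)M

    On server A the account also needs the "Log on as a service" right (secpol.msc, under Local Policies, User Rights Assignment); a service set to a fresh account without that right fails to start, which would explain the symptom above.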

    Read the article

  • IIS 7 and ASP.NET State Service Configuration

    - by Shawn
    We have two load-balanced web servers and we wanted to get away from sticky sessions, for obvious reasons. Our attempted approach is to use the ASP.NET State Service on one of the boxes to store the session state for both. I realize that it's best to have a server dedicated to storing sessions, but we don't have the resources for that. I've followed these instructions to no avail. The session still isn't being shared between the two servers, and I'm not receiving any errors. I have the same machine key for both servers, and I've set the application ID to a unique value that matches between the two servers. Any suggestions on how I can troubleshoot this issue?

    Update: I turned on the session state service on my local machine and pointed both servers at my local machine's IP address, and it worked as expected: the session was shared between both servers. This leads me to believe that the problem might be that I'm not using a standalone server as my state service. Perhaps the problem is that I am using the IP address 127.0.0.1 on one server and a different IP address on the other server. Unfortunately, when I try to use the network IP address instead of localhost, the connection doesn't seem to work from the host server. Any insight on whether my suspicions are correct would be appreciated.
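
    The local test points at configuration rather than architecture: both servers must present identical session settings and address the same state host. A sketch of the relevant web.config fragment (the address and port are placeholders; the state host also needs AllowRemoteConnection=1 in its registry before it accepts non-localhost clients, which is consistent with 127.0.0.1 working while the network address does not):

        <system.web>
          <sessionState mode="StateServer"
                        stateConnectionString="tcpip=192.168.1.10:42424"
                        timeout="20" />
          <!-- placeholder keys: both servers need the same real values -->
          <machineKey validationKey="SAME-ON-BOTH-SERVERS"
                      decryptionKey="SAME-ON-BOTH-SERVERS"
                      validation="SHA1" decryption="AES" />
        </system.web>

    Using the state host's network address in both web.config files (never 127.0.0.1 on one of them) keeps the two servers talking to the same store.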

    Read the article

  • Multi-authentication scenario for a public internet service using Kerberos

    - by StrangeLoop
    I have a public web server which has users coming from the internet (via HTTPS) and from a corporate intranet. I wish to use Kerberos authentication for the intranet users so that they are automatically logged in to the web application without the need to provide any login/password (assuming they are already logged in to the Windows domain). For the users coming from the internet I want to provide traditional basic/form-based authentication. User/password data for these users would be stored internally in a database used by the application. The web application will be configured to use Kerberos authentication for users coming from specific intranet IP networks, and basic/form-based authentication will be used for the rest of the users. From a security perspective, are there risks involved in this kind of setup, or is this a generally accepted solution? My understanding is that the server doesn't need access to the KDC (see Kerberos authentication, service host and access to KDC) and it can be completely isolated from AD and the corporate intranet. The server has a keytab file stored locally that is used to decrypt tickets sent by the users coming from the intranet. The tickets only contain the username and domain of the incoming user; the server never sees the passwords of authenticated users. If the server were hacked and the keytab file compromised, it would mean that an attacker could forge tickets for any domain user and get access to the web application as any user. But typically this is the case anyway if a hacker gains access to the keytab file on the local filesystem. The encryption key contained in the keytab file is based on the service account password in AD and is in hashed form; I guess it is very difficult to brute-force this password if strong Kerberos encryption like AES-256-SHA1 is used. As the server has no network access to the intranet, even the compromised service account couldn't be directly used for anything.
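
    For reference, the keytab described here is typically generated on the AD side with ktpass and copied once to the isolated server; a sketch with placeholder names (the SPN, account, and realm must match the real environment):

        ktpass /princ HTTP/webserver.example.com@EXAMPLE.COM /mapuser svc-web@EXAMPLE.COM /crypto AES256-SHA1 /ptype KRB5_NT_PRINCIPAL /pass * /out web.keytab

    Restricting the service account itself (no interactive logon, no group memberships beyond what the SPN needs) limits what a stolen keytab is worth, in line with the threat model sketched above.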

    Read the article

  • Systemd Service Start With Dynamic Port Value From Docker

    - by Sheriffen
    Using CoreOS, Docker, and systemd to manage my services, I want to properly perform service discovery. Since CoreOS utilizes etcd (a distributed key-value store) there is a very convenient way to do this: in systemd's ExecStartPost I can just insert the started service into etcd without problems. My use case needs something like this:

        ExecStartPost=/usr/bin/etcdctl set /services/myServiceName '{ \"host\": \"%H\", \"port\": 5555 }'

    which works like a charm. But this is where my idea popped up. Docker has the power to randomly assign a port if I just run docker run -p 5555, which is awesome since I don't have to set it statically in the *.service file and I could possibly run multiple instances on the same host. What if I could get the randomly assigned port and insert it instead of the static 5555? It turns out I can use the docker port command to get the port, and with some formatting we can extract just the port with:

        $(echo $(/usr/bin/docker port my-container-name 5555) | cut -d':' -f2)

    which works if I set it (using bash) like this:

        /usr/bin/etcdctl set /services/myServiceName '{ \"host\": \"%H\", \"port\": '$(echo $(/usr/bin/docker port my-container-name 5555) | cut -d':' -f2)' }'

    But using systemd I just can't get it to work. This is the code I'm using:

        ExecStartPost=/usr/bin/etcdctl set /services/myServiceName '{ \"host\": \"%H\", \"port\": '$(echo $(/usr/bin/docker port my-container-name 5555) | cut -d':' -f2)'}'

    Somehow I've got something wrong, but it's hard to debug since it works when typed in the terminal.
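
    That last detail is the answer in disguise: systemd does not run Exec lines through a shell, so $( ... ) command substitution never happens. The usual fix is to invoke the shell explicitly and escape the dollar signs ($$ is systemd's escape for a literal $); a sketch using the unit's own names:

        ExecStartPost=/bin/bash -c 'port=$$(/usr/bin/docker port my-container-name 5555 | cut -d: -f2); /usr/bin/etcdctl set /services/myServiceName "{\"host\": \"%H\", \"port\": $${port}}"'

    %H is still expanded by systemd before bash runs, so the hostname specifier keeps working inside the quoted string.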

    Read the article

  • Opscenter repair service times out. ERROR: Requested range intersects a local range [...]

    - by jlemire-zs
    My production cluster has had the repair service enabled since April 16th, with the default 9-day time to completion, and repairs would complete properly. However, since May 22nd it has been disabled automatically by OpsCenter. From /var/log/opscenter/opscenterd.log:

        [...]
        2014-06-03 21:13:47-0400 [zs_prod] ERROR: Repair task (<Node 10.1.0.22='6417880425364517165'>, (-4019838962446882275L, -4006140687792135587L), set(['zs_logging', 'OpsCenter'])) timed out after 3600 seconds.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: Repair task (<Node 10.1.0.22='6417880425364517165'>, (-4006140687792135587L, -4006140687792135586L), set(['zs_logging', 'OpsCenter'])) timed out after 3600 seconds.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: More than 100 errors during repair service, shutting down repair service
        2014-06-03 22:16:44-0400 [zs_prod] INFO: Stopping repair service
        [...]

    From /var/log/opscenter/repair_service/zs_prod.log:

        [...]
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: Repair task (<Node 10.1.0.22='6417880425364517165'>, (-4006140687792135587L, -4006140687792135586L), set(['zs_logging', 'OpsCenter'])) timed out after 3600 seconds.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: Task (<Node 10.1.0.22='6417880425364517165'>, (-4006140687792135587L, -4006140687792135586L), set(['zs_logging', 'OpsCenter'])) has failed 1 times.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: 101 errors have ocurred out of 100 allowed.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: More than 100 errors during repair service, shutting down repair service
        2014-06-03 22:16:44-0400 [zs_prod] INFO: Stopping repair service

    On the nodes on which the repair fails, from /var/log/cassandra/system.log:

        ERROR [RMI TCP Connection(93502)-10.1.0.22] 2014-06-03 20:12:28,858 StorageService.java (line 2560) Repair session failed:
        java.lang.IllegalArgumentException: Requested range intersects a local range but is not fully contained in one; this would lead to imprecise repair
            at org.apache.cassandra.service.ActiveRepairService.getNeighbors(ActiveRepairService.java:164)
            at org.apache.cassandra.repair.RepairSession.<init>(RepairSession.java:128)
            at org.apache.cassandra.repair.RepairSession.<init>(RepairSession.java:117)
            at org.apache.cassandra.service.ActiveRepairService.submitRepairSession(ActiveRepairService.java:97)
            at org.apache.cassandra.service.StorageService.forceKeyspaceRepair(StorageService.java:2620)
            at org.apache.cassandra.service.StorageService$5.runMayThrow(StorageService.java:2556)
            at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)

    These errors, which only occur if the repair service is running, are the only errors these nodes experience. Outside of the repair task, the Cassandra cluster works perfectly. I am running OpsCenter 4.1.2 with a 6-node DSE 4.0.2 cluster installed on Linux virtual machines. The nodes run a vanilla installation of Ubuntu Server 12.04 64-bit, and DSE was installed and secured according to the provided installation documentation. I have been experiencing this problem on my development cluster for a while too (with DSE 4.0.0, 4.0.1 and 4.0.2), but I thought that was because of some configuration error on my part; the problem appeared spontaneously at some point there as well. The Cassandra cluster has been working very smoothly with a good write throughput. It is very stable and has enough resources to work with, and we did not notice any problems with the applications that depend on it.

    Read the article

  • A server which uses 2 IPs and is needed to give service (under NAT)

    - by user6004
    I have an internal server which runs a certain service. This service listens on one port and speaks on a different port. The problem with the service is that it can't listen and speak on the same IP address, so I have configured two IP addresses for that NIC, and so I "solved" the listening-and-speaking problem. I have a problem, though: I need that server to be NATed with a public IP address, and it needs to be available from the outside (appearing as only one IP). The question is, how do I solve this? If I set up NAT for one IP address (the listening one), then the server will be able to receive requests from the outside, but won't be able to send out traffic (because the other IP won't have NAT). If I set up NAT for both of the IPs, then when traffic comes in for the listening port, it won't necessarily arrive at the listening IP, but rather at the speaking one. I hope I made myself clear and that there is a sensible solution here that I am missing.
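
    If the NAT device is a Linux box, the asymmetry maps naturally onto two separate rules: DNAT inbound traffic to the listening IP, and SNAT outbound traffic from the speaking IP, both using the single public address. A sketch with placeholder addresses, interface, protocol, and port:

        # inbound: public address, service port -> listening IP
        iptables -t nat -A PREROUTING -i eth0 -d 203.0.113.10 -p tcp --dport 5000 -j DNAT --to-destination 192.168.1.10
        # outbound: speaking IP -> rewritten as the public address
        iptables -t nat -A POSTROUTING -o eth0 -s 192.168.1.11 -j SNAT --to-source 203.0.113.10

    Commercial firewalls express the same idea as a destination-NAT rule plus a source-NAT rule scoped to the second internal address.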

    Read the article

  • Protocol (or service publish/discovery) to detect devices in network

    - by Gobliins
    We connect some embedded devices in a network. What I am looking for now is a way to find the devices' IPs and identify them. We work with Windows PCs, and I am about to write a C# tool that should do this. I thought about sending a UDP broadcast where the acknowledgement contains the device's IP, which would mean the device needs a running daemon and has to assign itself an IP. Another option is running a service (like a printer does) on the device, and on the PC just looking the service up. I read about things like APIPA, Zeroconf, IPv4 link-local, Bonjour, DNS-SD and mDNS; they can automatically assign IPs and publish services in a network. My question is: can someone recommend what would be good for my task?

    - The protocol or service should be light on resources (memory/CPU usage).
    - Are there standard protocols to use?
    - Is DNS a good idea, or would it be too resource-consuming just for finding a device's IP?
    - It should also work when no DHCP servers are around.

    Edit: to clarify a bit: the IP configuration is automatic. The problem to focus on is how to tell the PC which IP in the network belongs to which device (identity); with a direct connection there would be only one.
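
    The zeroconf family listed above (mDNS/DNS-SD, i.e. Bonjour on Windows, Avahi on Linux) covers both halves of the problem: IPv4 link-local addressing works without DHCP, and service browsing answers the "which IP belongs to which device" question because each device publishes a named service instance. A sketch of the flow using the Avahi command-line tools on an embedded Linux device (the service type _mydevice._tcp and the TXT record are made-up examples):

        # on the device: publish an identifying service with a serial number
        avahi-publish-service "Device-1234" _mydevice._tcp 8080 "serial=1234" &
        # on any machine with the tools installed: resolve every instance to name, IP and TXT data
        avahi-browse -rt _mydevice._tcp

    On the Windows side, the Bonjour SDK exposes the same browse/resolve operations to a C# tool, and the daemon footprint on the device is small, which fits the low-resource requirement.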

    Read the article

  • Visual Studio 2013 Static Code Analysis in depth: What? When and How?

    - by Hosam Kamel
    In this post I'll illustrate in detail the following points:

    - What is static code analysis?
    - When to use it
    - Supported platforms
    - Supported Visual Studio versions
    - How to use it: manually, automatically, at check-in to TFS version control (TFSVC), and as part of Team Build
    - Understanding the Code Analysis results and learning how to fix them
    - Creating your custom rule set
    - Q & A
    - References

    What is static code analysis?

    The Static Code Analysis feature of Visual Studio performs static analysis on code to help developers identify potential design, globalization, interoperability, performance, security, and other categories of potential problems, according to Microsoft's rules that mainly target best practices in writing code. A large set of those rules is included with Visual Studio, grouped into different categories targeting specific coding issues such as security, design, interoperability, and globalization. "Static" here means analyzing the source code without executing it; this type of analysis can be performed through automated tools (like the Visual Studio 2013 Code Analysis tool) or manually through code review, which is already supported in Visual Studio 2012 and 2013 (see the "Using Code Review to Improve Quality" video on Channel9). There is also dynamic analysis, which is performed on executing programs using software testing techniques such as code coverage.

    When to use it?

    Running the Code Analysis tool at regular intervals during your development process can enhance the quality of your software; examining your code for a set of common defects and violations is always a good programming practice. Code analysis can also find defects in your code that are difficult to discover through testing, allowing you to achieve a first-level quality gate for your application during the development phase, before you release it to the testing team.

    Supported platforms

    .NET Framework, native (C and C++), and database applications.

    Supported Visual Studio versions

    All editions of Visual Studio 2013 (except Visual Studio Test Professional); check the feature comparisons. Creating and modifying a custom rule set requires Visual Studio Premium or Ultimate.

    How to use it?

    Code Analysis can be run manually at any time from within the Visual Studio IDE, or set up to run automatically as part of a Team Build or a check-in policy for Team Foundation Server.

    Run Code Analysis manually

    To run code analysis manually on a project, on the Analyze menu click Run Code Analysis on your project, or simply right-click the project name in Solution Explorer and choose Run Code Analysis from the context menu.

    Run Code Analysis automatically

    To run code analysis each time you build a project, select Enable Code Analysis on Build on the project's property page.

    Run Code Analysis while checking in source code to TFS version control (TFSVC)

    Team Foundation Version Control (TFVC) provides a way for organizations to enforce practices that lead to better code and more efficient group development through check-in policies: rules that are set at the team project level and enforced on developer computers before code is allowed to be checked in. (This is available only if you're using Team Foundation Server.) Required permissions on Team Foundation Server: you must have the "Edit project-level information" permission set to Allow; typically your account must be part of Project Administrators or Project Collection Administrators. For more information about Team Foundation permissions check http://msdn.microsoft.com/en-us/library/ms252587(v=vs.120).aspx

    In Team Explorer, right-click the team project name, point to Team Project Settings, and then click Source Control. In the Source Control dialog box, select the Check-in Policy tab. Click Add to create a new check-in policy, or double-click the existing Code Analysis item in the Policy Type list to change the policy. Check or uncheck the policy options based on the configuration you need:

    - Enforce check-in to only contain files that are part of current solution: code analysis can run only on files specified in solution and project configuration files. This policy guarantees that all code that is part of a solution is analyzed.
    - Enforce C/C++ Code Analysis (/analyze): requires that all C or C++ projects be built with the /analyze compiler option to run code analysis before they can be checked in.
    - Enforce Code Analysis for Managed Code: requires that all managed projects run code analysis and build before they can be checked in.

    Check the code analysis rule set reference on MSDN. What is a rule set? A rule set is a group of code analysis rules, like the example of Microsoft.Design being the rule set name that contains the rule "Do not declare static members on generic types". Once you have configured the analysis rules, the policy is enabled for all team members in the project; whenever a team member checks in any source code to TFSVC, the policy section will highlight the Code Analysis policy. TFS is a very extensible platform, so you can implement your own custom Code Analysis check-in policy; check this link for more details: http://msdn.microsoft.com/en-us/library/dd492668.aspx, but be aware of compatibility between different TFS versions: http://msdn.microsoft.com/en-us/library/bb907157.aspx

    Run Code Analysis as part of Team Build

    With Team Foundation Build (TFBuild), you can create and manage build processes that automatically compile and test your applications and perform other important functions. Code Analysis can be enabled in the build definition file by selecting the correct value for the build process parameter "Perform Code Analysis". Once configured, kick off your build definition to queue a new build; Code Analysis will run as part of the build workflow and you will be able to see code analysis warnings as part of the build report.

    Understand the Code Analysis results and learn how to fix them

    Now that we have covered the Code Analysis configuration and the different ways of running it, let's dig into the results, how to understand them, and how to resolve them. The Code Analysis window in Visual Studio shows all the analysis results based on the rule sets you configured in the project file properties. Each result item contains:

    1. Check ID: the unique identifier for the rule. CheckId and Category are used for in-source suppression of a warning.
    2. Title: the title of the warning message.
    3. Description: a description of the problem or a suggested fix.
    4. File Name: the file name and the line of code that violate the code analysis rule set.
    5. Category: the code analysis category for this error.
    6. Warning/Error: depends on how you configure it in the rule set; the default is the Warning level.
    7. Action:
       - Copy: copy the warning information to the clipboard.
       - Create Work Item: if you're connected to Team Foundation Server you can create a work item, most probably a Task or Bug, and assign it to a developer to fix a certain code analysis warning.
       - Suppress Message: there are times when you might decide not to fix a code analysis warning. You might decide that resolving the warning requires too much recoding in relation to the probability that the issue will arise in any real-world implementation of your code, or you might believe that the analysis used in the warning is inappropriate for the particular context. You can suppress individual warnings so that they no longer appear in the Code Analysis window. Two options are available: "In Source" inserts a SuppressMessage attribute in the source file above the method that generated the warning, which makes the suppression more discoverable; "In Suppression File" adds a SuppressMessage attribute to the GlobalSuppressions.cs file of the project, which can make the management of suppressions easier. Note that the SuppressMessage attribute added to GlobalSuppressions.cs also targets the method that generated the warning; it does not suppress the warning globally.

    Visual Studio makes it very easy to fix a code analysis warning: just click the Check ID hyperlink if you are not sure how to fix the warning, and you'll be directed to MSDN (online or a local copy, based on the configuration you chose while installing Visual Studio), where you will find all the information about the warning, including how to fix it.

    Create a custom Code Analysis rule set

    The Microsoft standard rule sets provide groups of rules that are organized by function and depth. For example, the Microsoft Basic Design Guidelines Rules and the Microsoft Extended Design Guidelines Rules contain rules that focus on usability and maintainability issues, with added emphasis on naming rules in the Extended rule set. You can create and modify a custom rule set to meet specific project needs associated with code analysis. To create a custom rule set, you open one or more standard rule sets in the rule set editor (see the sketch after the Q & A section below). Creating and modifying a custom rule set requires Visual Studio Premium or Ultimate. Check "How to: Create a Custom Rule Set" on MSDN for more details: http://msdn.microsoft.com/en-us/library/dd264974.aspx

    Q & A

    Visual Studio static code analysis vs. FxCop vs. StyleCop: http://www.excella.com/blog/stylecop-vs-fxcop-difference-between-code-analysis-tools/
    Code Analysis for SharePoint apps and SPDisposeCheck? This post lists some of the rule sets you can run specifically for SharePoint applications and how to integrate SPDisposeCheck as well.
    Code Analysis for SQL Server database projects? This post illustrates how to run static code analysis on T-SQL through SSDT.
    ReSharper 8 vs. Visual Studio 2013? This document lists some of the features that are provided by ReSharper 8 but are missing or not as fully implemented in Visual Studio 2013.
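
    Picking up the custom rule set section above: a .ruleset file is plain XML, so a minimal sketch looks like this (CA1001 and CA2100 are real Microsoft rule IDs, chosen here only as examples; ToolsVersion 12.0 corresponds to Visual Studio 2013):

        <?xml version="1.0" encoding="utf-8"?>
        <RuleSet Name="My Team Rules" Description="Design and security rules only" ToolsVersion="12.0">
          <Rules AnalyzerId="Microsoft.Analyzers.ManagedCodeAnalysis" RuleNamespace="Microsoft.Rules.Managed">
            <!-- CA1001: types that own disposable fields should be disposable -->
            <Rule Id="CA1001" Action="Error" />
            <!-- CA2100: review SQL queries for security vulnerabilities -->
            <Rule Id="CA2100" Action="Warning" />
          </Rules>
        </RuleSet>

    Pointing the project's Code Analysis settings (or the check-in policy) at this file applies exactly these rules with the listed severities.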
References A Few Billion Lines of Code Later: Using Static Analysis to Find Bugs in the Real World http://cacm.acm.org/magazines/2010/2/69354-a-few-billion-lines-of-code-later/fulltext What is New in Code Analysis for Visual Studio 2013 http://blogs.msdn.com/b/visualstudioalm/archive/2013/07/03/what-is-new-in-code-analysis-for-visual-studio-2013.aspx Analyze the code quality of Windows Store apps using Visual Studio static code analysis http://msdn.microsoft.com/en-us/library/windows/apps/hh441471.aspx [Hands-on-lab] Using Code Analysis with Visual Studio 2012 to Improve Code Quality http://download.microsoft.com/download/A/9/2/A9253B14-5F23-4BC8-9C7E-F5199DB5F831/Using%20Code%20Analysis%20with%20Visual%20Studio%202012%20to%20Improve%20Code%20Quality.docx Originally posted at "Hosam Kamel| Developer & Platform Evangelist" http://blogs.msdn.com/hkamel

    Read the article

  • Multiple clients on the same Oracle UCM

    - by [email protected]
    We are very active implementing ECM platforms that serve multiple clients, achieving two very important goals: the end client can pay the provider for the ECM platform as a service (SaaS), and, naturally, saves on complexity and on infrastructure, administration, training and storage costs. These past few days we have been explaining the Master-Proxy model of Oracle UCM, with which we can deploy this kind of platform. It will not always be the most suitable solution, because sometimes we will want shared platforms with completely isolated clients; in any case, the migration console lets us export and import components, metadata, content, workflows, etc., so that we can choose the most suitable model for each case. But how does it work? As you can see in the image, it is based on installing several UCM instances and configuring them so that some of them behave as "Master" (several, to achieve high availability) and the rest behave as "Proxy" (several UCM instances can also act as a single "Proxy", allowing the load to be balanced according to whether each client requires more or less performance). This configuration (shown in the attached image) allows us to:

    - Delegate the user management of each client. Users of the Master can access all the proxies, but the users of each proxy only access their own repository.
    - Delegate functionality and components. It is possible to configure different functionality on each proxy, so that some servers are specialized in Web Content Management and others in Document Management (for example).
    - Use different metadata models. We can model general document types for the whole platform and other, client-specific ones on each UCM "Proxy".
    - Centralize searches and access to documentation repositories with different character sets. A UCM Master can centralize searching across UCM proxies that host documentation in different character sets (for example, one UCM for documentation in Western European languages (English, Spanish, French, German, ...) and another UCM proxy under an Asian character set (Japanese, Korean, Chinese, ...)).

    Sources: all the detailed information can be found in the Oracle UCM documentation, here: http://download.oracle.com/docs/cd/E10316_01/ouc.htm. Platform-specific material is in the "Planning and Implementation Guide", here: plan_implement_guide_10en.pdf

    Read the article

  • Tellago is still hiring….

    - by gsusx
    Tellago's SOA practice is rapidly growing and we are still hiring. In that sense, we are looking for Connected Systems (WCF, BizTalk, WF) experts who are passionate about building game-changing solutions with the latest Microsoft technologies. You will be working alongside technology gurus like DonXml, Pablo Cibraro or Dwight Goins. If you are interested and not afraid of working with a bunch of crazy people ;) please drop me a line at jesus dot rodriguez at tellago dot com. Hope to hear from...(read more)

    Read the article

  • We are hiring (take a minute to read this, it's not another BS talk ;) )

    - by gsusx
    I really wanted to wait until our new website was out to blog about this, but I hope you can put up with the ugly website for a few more days :). Tellago keeps growing and, after a quick break at the beginning of the year, we are back in hiring mode :). We are currently expanding our teams in the United States and Argentina and have various positions open in the following categories. .NET developers: If you are an exceptional .NET programmer with a passion for creating great software solutions working...(read more)

    Read the article

  • The Next Wave of PeopleSoft Capabilities for the Staffing Industry Is Here

    - by Mark Rosenberg
    With the release of PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 in January this year, we introduced substantial new capabilities for our Staffing Industry customers. Through a co-development project with Infosys Limited, we have enriched Oracle's PeopleSoft Staffing Solution with new tools aimed at accelerating and improving the quality of job order fulfillment, increasing branch recruiter productivity, and driving profitable growth. Staffing industry firms succeed based on their ability to rapidly, cost-effectively, and continually fill their pipelines with new clients and job orders, recruit the best talent, and match orders with talent. Pressure to execute in each of these functional areas is even more acute on staffing firms as contingent labor becomes a more substantial and permanent part of the workforce mix. In an industry that creates value through speedy execution, there is little room for manual, inefficient processes and brittle, custom integrations, which throttle profitability and growth. The latest wave of investment in the PeopleSoft Staffing Solution focuses on generating efficiency and flexibility for our customers. Simplicity To operate profitably and continue growing, a Staffing enterprise needs its client management, recruiting, order fulfillment, and other processes to function in harmony. Most importantly, they need to be simple for recruiters, branch managers, and applicants to access and understand. The latest PeopleSoft Staffing Solution set of enhancements includes numerous automated defaulting mechanisms and information-rich dashboard pagelets that even a new employee can learn quickly. Pending Applicant, Agenda management, Search, and other pagelets are just a few of the newest, easy-to-use tools that not only aggregate and summarize information, but also provide instant access to applicants, tasks, and key reports for branch staff. Productivity The leading firms in the Staffing industry are those that can more efficiently orchestrate large numbers of candidates, clients, and orders than their competitors can. PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 delivers productivity boosters that Staffing firms can leverage to streamline tasks and processes for competitive advantage. For example, we enhanced the Recruiting Funnel, which manages the candidate on-boarding process, with a highly interactive user interface. It integrates disparate Staffing business processes and exploits new PeopleTools technologies to offer a superior on-boarding user experience. Automated creation of agenda items and assignment tasks for each candidate minimizes setup and organizes assignment steps for the on-boarding process. Mass updates of tasks and instant access to the candidate overview page (which we also expanded), candidate event status, event counts, and other key data enable recruiters to better serve clients and candidates. Lower TCO Constructing and maintaining an efficient yet flexible labor supply chain can be complicated, let alone expensive. Traditionally, Staffing firms have been challenged in controlling their technology cost of ownership because connecting candidate and client-facing tools involved building and integrating custom applications and technologies and managing staff turnover, placing heavy demands on IT and support staff. With PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2, there are two major enhancements that aggressively tackle these challenges. 
First, we added another integration framework to enable cost-effective linking of the Staffing firm’s PeopleSoft applications and its job board distributors. (The first PeopleSoft 9.1 Feature Pack released in March 2011 delivered an integration framework to connect to resume parsing providers.) Second, we introduced the teaming concept to enable work to be partitioned to groups, as well as individuals. These two capabilities, combined with a host of others, position Staffing firms to configure and grow their businesses without growing their IT and overhead expenditures. For our Staffing Industry customers, PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 is loaded with high-value tools aimed at enabling and sustaining a flexible labor supply chain. For more information, contact [email protected] or [email protected].

    Read the article

  • Thinking of Adopting the PRINCE2™ Project Management Methodology? Consider Using PeopleSoft Projects to Help

    - by Megan Boundey
    Ever wondered what the PRINCE2™ project management methodology is? Ever wondered if you could use PeopleSoft Projects (ESA) to manage your projects using PRINCE2™?  Published by the Office of Government Commerce in the UK, PRINCE2™ is a scalable, business case and product description-driven Project Management methodology based upon managing by exception. Project activities are organized around fulfilling and meeting the product description. Quality assurance, configuration control and risk management are all based upon ensuring that the product delivered accurately meets the product description. PRINCE2™ is built upon seven principles and seven themes, each underpinning the PRINCE2™project management processes. Important for today’s business environment, the focus throughout PRINCE2™ is on the Business Case, which describes the rationale and business justification for a project. The Business Case drives all the project management processes from initial project setup to successful finish. PRINCE2™, as a method and a certification, is adopted in many countries worldwide, including the UK, Western Europe and Australia. We’ve just released a new white paper, which provides you with an overview of the principles, themes and project management processes associated with PRINCE2™. It also shows how these map to the functionality available within PeopleSoft Projects (ESA). In the time it takes to drink a coffee, you can learn about PRINCE2™ and determine whether it might help you deliver better project results. We encourage you to take a look.

    Read the article

  • SQL Server 2008 R2 Cumulative Updates are available

    - by AaronBertrand
    Microsoft has released cumulative updates for SQL Server 2008 R2. SQL Server 2008 R2 SP1 Cumulative Update #8 KB article is http://support.microsoft.com/kb/2723743 Build number is 10.50.2822.0 There are 20 fixes published as of 2012-08-31 This update is relevant for builds between 10.50.2500 and 10.50.2820 Note that the page that lists builds and updates for SP1 seems confused; it currently states that the build is 10.50.2822, while the KB article shows 10.50.2821. The file from the hotfix is 10.50.2822,...(read more)

    Read the article

  • Oracle Consulting North America is now live on PeopleSoft Services Procurement and PeopleSoft Resource Management

    - by Howard Shaw
    Last month, Oracle's own internal consulting group (OCS North America) went live on PeopleSoft Services Procurement and PeopleSoft Resource Management to manage all aspects of identifying, recruiting, and deploying billable subcontractors on North America Applications customer consulting projects. The primary goals were to enhance the subcontractor staffing process, improve operational and informational processes, and improve collaboration between the Oracle NA Consulting Subcontractor Program and subcontractor suppliers. Over 200 registered external suppliers access the tool, review open needs and competitively bid their resources to work on NA Applications projects. This implementation highlights the usage of Oracle’s own solutions to streamline and enhance business operations, as the PeopleSoft 9.1 applications (Services Procurement and Resource Management) were deployed using Sun hardware, Oracle Enterprise Linux, and Oracle Virtual Machines.For more information, please navigate to the following web pages: PeopleSoft Services Procurement PeopleSoft Resource Management

    Read the article

  • Business guy building a software company [closed]

    - by Dreamer
    I am a business guy who is about to embark on a very risky journey to start my own software company. I have done sales for several software companies, and in the last 8 years I have managed to generate over $15 million in pure SaaS revenues for my employers. I think now it's time to do it for myself and see where I can take the business. I have an idea in mind which I would like to develop, and I have been speaking with several companies whom I may hire to convert that idea into a SaaS-based offering. I am scared of the following:

    - Being ripped off, as I have no technical knowledge
    - Being over-charged
    - Building something and realizing the foundation was weak, not scalable, etc.

    Can anyone help me identify what I need to do before I sign a software development company to start my project? What do I need to know? What is the typical cost? What is a realistic time frame? Which coding language is better? What steps can I take to prevent myself from being ripped off?

    Read the article

  • Which hosted ecommerce solutions allow customization?

    - by Diego
    Following my previous question, I'm now evaluating the possibility of using a hosted platform for the ecommerce project I have to implement. Before I start "playing" with each one of them, I'd like to ask if anybody knows which ones allow a good degree of customization. At the moment I'm looking at BigCommerce, but it seems that customization is limited to templates, while I need additional features which require PHP coding. Also, I'd need to be able to import additional product data into the system, and I'd need to do this via code; I had a look at some integrations, but they gave me the impression that they all run on the rendered page via JavaScript. For example, if I want to show Facebook reviews on a product, I'll have to add some JS that fetches them and shows them on the page. This is not optimal, as I must cater for people with JS disabled; therefore I'd need to run my own PHP code. Thanks again for all the opinions.

    Read the article

  • Is osTicket secure/private enough

    - by Andy
    I was going to use osTicket as the 'help desk' for my website; however, I got a little concerned when I realised that clients' login details for seeing their support tickets are only their email address and a ticket ID. I am probably going over the top with security, which is why I wanted to get some second opinions on how secure osTicket actually is and whether I should use it with my website. I run a software company, so chances are licence keys may be included in support tickets, which is obviously sensitive and valuable information, so I want to ensure that the likelihood of a support ticket being hacked is very low. If there are any plugins/additions that make osTicket more 'secure', I would appreciate it if you could point me to them. Otherwise, if there is other free, better-suited help desk software out there, please let me know. Thanks in advance.

    Read the article
