Search Results

Search found 12909 results on 517 pages for 'domain'.

Page 436/517 | < Previous Page | 432 433 434 435 436 437 438 439 440 441 442 443  | Next Page >

  • Assessing Relative Maintainability

    - by João Bragança
    We (a contractor, actually) are implementing an off-the-shelf system to replace a legacy homegrown system for the core domain of the company (designing widgets). Unfortunately both systems will have to run concurrently for some time, as the product just isn't ready yet. Also, the decision was made to migrate only some of the widgets from the legacy system, based on the date of last sale activity.

    Later on a new requirement came down: certain people in the company, most of them outside of the widget development context, want to search all widgets. The search results screen has 3 pieces of data: a GUID, a human-readable id that is searchable, and a brief description (may need to be searchable in the future). In the widget details, there will be multiple screens. These screens align very well along SOA / bounded context lines - a screen for marketing data, a screen for sales history, etc.

    UML ahead! I am probably using the wrong kind of arrows here, so please forgive me. The current solution - which is not in production yet - is something like the following: both systems will be queried and the controller will merge the results. The new system has its own proprietary query language (we've alleviated this a bit with a LINQ provider). It also puts a lot of data on the wire: 15 search results typically run about 60k of unintelligible SOAP-wrapped XML. So I would prefer to avoid querying this system directly.

    These two systems publish events to help us integrate with other systems, mainly an ERP system. One of these events contains all the data necessary for the search screen. I proposed the following alternative: subscribe to those events and maintain a separate search database. However, I am being told that 'adding another database' will create more maintenance down the road. I believe this to be false, as I had to add a relatively simple feature that took several hours longer than anticipated because of this merging code.

    I want to get a feel for which system is more maintainable in the long run. I personally have not had the burden of maintaining any large system, and I want something more than my gut. Specifically, I'd like to know whether having more, specialized physical databases is more or less maintainable than having fewer, larger physical databases.
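
    Not from the post, but a minimal sketch of the proposed event-fed search store, assuming a .NET shop (the LINQ provider suggests one). WidgetChangedEvent, WidgetSearch, and all column names are invented for illustration:

        // Sketch only: subscribe to the integration event the two systems
        // already publish and denormalize it into a small search table.
        // WidgetChangedEvent, WidgetSearch and the column names are invented.
        using System.Data;

        public class WidgetSearchDenormalizer
        {
            private readonly IDbConnection _searchDb;

            public WidgetSearchDenormalizer(IDbConnection searchDb)
            {
                _searchDb = searchDb;
            }

            // Called by whatever bus or queue delivers the integration events.
            public void Handle(WidgetChangedEvent e)
            {
                using (IDbCommand cmd = _searchDb.CreateCommand())
                {
                    // Upsert exactly the three fields the search screen needs.
                    cmd.CommandText =
                        @"UPDATE WidgetSearch
                             SET HumanReadableId = @hrid, Description = @desc
                           WHERE Id = @id;
                          IF @@ROWCOUNT = 0
                              INSERT INTO WidgetSearch (Id, HumanReadableId, Description)
                              VALUES (@id, @hrid, @desc);";
                    AddParam(cmd, "@id", e.Id);
                    AddParam(cmd, "@hrid", e.HumanReadableId);
                    AddParam(cmd, "@desc", e.Description);
                    cmd.ExecuteNonQuery();
                }
            }

            private static void AddParam(IDbCommand cmd, string name, object value)
            {
                IDbDataParameter p = cmd.CreateParameter();
                p.ParameterName = name;
                p.Value = value;
                cmd.Parameters.Add(p);
            }
        }

    Search queries would then hit this one table instead of fanning out to both systems and merging, which is exactly the trade-off the question weighs.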

    Read the article

  • Access Control Service v2: Registering Web Identities in your Applications [code]

    - by Your DisplayName here!
    You can download the full solution here. The relevant parts in the sample are:

    Configuration
    I use the standard WIF configuration with passive redirect. This kicks in automatically whenever authorization fails in the application (e.g. when the user tries to get to an area that requires authentication or needs registration).

    Checking and transforming incoming claims
    In the claims authentication manager we have to deal with two situations: users that are authenticated but not registered, and registered (and authenticated) users. Registered users will have claims that come from the application domain; the claims of unregistered users come directly from ACS and get passed through. In both cases a claim for the unique user identifier will be generated. The high-level logic is as follows:

        public override IClaimsPrincipal Authenticate(
            string resourceName, IClaimsPrincipal incomingPrincipal)
        {
            // do nothing if anonymous request
            if (!incomingPrincipal.Identity.IsAuthenticated)
            {
                return base.Authenticate(resourceName, incomingPrincipal);
            }

            string uniqueId = GetUniqueId(incomingPrincipal);

            // check if user is registered
            RegisterModel data;
            if (Repository.TryGetRegisteredUser(uniqueId, out data))
            {
                return CreateRegisteredUserPrincipal(uniqueId, data);
            }

            // authenticated by ACS, but not registered
            // create unique id claim
            incomingPrincipal.Identities[0].Claims.Add(
                new Claim(Constants.ClaimTypes.Id, uniqueId));

            return incomingPrincipal;
        }

    User Registration
    The registration page is handled by a controller with the [Authorize] attribute. That means you need to authenticate before you can register (crazy eh? ;). The controller then fetches some claims from the identity provider (if available) to pre-fill form fields. After successful registration, the user is stored in the local data store and a new session token gets issued. This effectively replaces the ACS claims with application-defined claims without requiring the user to re-sign-in.

    Authorization
    All pages that should be reachable only by registered users check for a special application-defined claim that only registered users have. You can nicely wrap that in a custom attribute in MVC:

        [RegisteredUsersOnly]
        public ActionResult Registered()
        {
            return View();
        }

    HTH
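
    The post shows the attribute's usage but not its body. A minimal sketch of what RegisteredUsersOnly might look like; the "IsRegistered" claim type is invented here to stand in for the special application-defined claim mentioned above:

        using System.Linq;
        using System.Web;
        using System.Web.Mvc;
        using Microsoft.IdentityModel.Claims;

        // Hedged sketch: authorize only users carrying the application-defined
        // claim issued at registration. Constants.ClaimTypes.IsRegistered is
        // hypothetical; everything else is standard ASP.NET MVC / WIF.
        public class RegisteredUsersOnlyAttribute : AuthorizeAttribute
        {
            protected override bool AuthorizeCore(HttpContextBase httpContext)
            {
                var principal = httpContext.User as IClaimsPrincipal;
                if (principal == null || !principal.Identity.IsAuthenticated)
                {
                    return false;
                }

                // only the session token issued at registration carries this claim
                return principal.Identities[0].Claims
                    .Any(c => c.ClaimType == Constants.ClaimTypes.IsRegistered);
            }
        }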

    Read the article

  • How quickly to leave contract-to-hire gig where you don't want to be hired? [closed]

    - by nono
    So you move to a big new city with tons of software development opportunity, having taken a six-month contract-to-hire job. The company treats you really well and has a good team and work environment. However, the recruiter assured you when offering the gig that it would be a good position in which you can advance your learning from more senior developers (a primary concern of yours), but you're starting to realize that a job recruiter isn't in a position to judge that. The team in question isn't very up on modern software practices (you start to sympathize with this guy and read his post over and over again: http://stackoverflow.com/questions/1586166/career-killer-nhibernate-oop-design-patterns-domain-driven-design-test-driv), much of the company's software is very old and very, very poorly architected, and the company (like so many others) seems to be concerned only with continually extending the software without investing in any structural improvements.

    You're absolutely dismayed at how long it takes your team (including you) to fulfill simple feature requests (maybe 500-1000% longer than with better-designed software that you've worked on in the past), but no one else there seems to think anything of it. You find the work and the company's business intensely uninteresting, but due to the convoluted design of their various software systems, fulfilling the work will require as much mental engagement as any other development position. You feel a bit naive about not having asked the right questions during your interview process, and for not having anticipated that your team at your former podunk company might possibly be light-years ahead of any team in Big Shiny City, but you know you don't want to stay at this place, and (were it not for your personal, after-hours studying and programming efforts) fear that you might actually give a worse interview after completing your 6 months than you did when you started at the place.

    You read about how hard a time local companies are having filling their positions with qualified software development candidates. You read all sorts of fabulous-sounding job postings online and feel like you're really missing out. In spite of the comfortable environment, you feel like you would willingly accept a somewhat more demanding or aggressive lifestyle to feel like you are learning and progressing and producing something meaningful.

    My questions are: how quickly do you leave, and how do you go about giving a polite reason for departing? The contract is written to allow them to "can" you and to allow you to leave with 2 weeks' notice. Do you ethically owe the 6 months? Upon taking the position, the company told you they were not interested in candidates who were intending to stay for only 6 months and then leave (you were not intending to bail after 6 months at that time), so perhaps they might be fine if you split now, knowing that you don't want to stick around for the full-time hire?

    Read the article

  • Upgrading Oracle Enterprise Manager: 12c to 12c Release 2

    - by jorge_neidisch
    1 - Download the OEM 12c R2. It can be downloaded here:
        http://www.oracle.com/technetwork/oem/grid-control/downloads/index.html
        Note: it is a set of three huge zips.
    2 - Unzip the archives.
    3 - Create a directory (as the OEM owner user) where the upgraded Middleware should be installed. For instance:
        $ mkdir /u01/app/oracle/Middleware12cR2
    4 - Back up OMS (Middleware home and inventory), Management Repository and Software Library:
        http://docs.oracle.com/cd/E24628_01/doc.121/e24473/ha_backup_recover.htm#EMADM10740
    5 - Ensure that the Management tables don't have snapshots:
        SQL> select master, log_table from all_mview_logs where log_owner='<EM_REPOS_USER>';
        If there are snapshots, drop them:
        SQL> drop snapshot log on <master>;
    6 - Copy the emkey from the existing OMS:
        $ <OMS_HOME>/bin/emctl config emkey -copy_to_repos [-sysman_pwd <sysman_pwd>]
        To verify whether the emkey is copied, run the following command:
        $ <OMS_HOME>/bin/emctl status emkey
        If the emkey is copied, then you will see the following message:
        The EMKey is configured properly, but is not secure.
        Secure the EMKey by running "emctl config emkey -remove_from_repos".
    7 - Stop the OMS and the Agent:
        $ <OMS_HOME>/bin/emctl stop oms
        $ <AGENT_HOME>/bin/emctl stop agent
    8 - From the unzipped directory, run:
        $ ./runInstaller
        8a - Follow the wizard: Email / MOS; Software Updates: disable or leave empty.
        8b - Follow the wizard: Installation type: Upgrade -> One System Upgrade.
        8c - Installation Details: Middleware home location: enter the directory created in step 3.
        8d - Enter the DB Connection Details: credentials for SYS and SYSMAN.
        8e - A dialog comes up: Stop the Job Gathering: click 'Yes'.
        8f - A warning comes up: click 'OK'.
        8g - Select the plugins to deploy along with the upgrade process.
        8h - Extend WebLogic: enter the password (recommended: the same password as for the SYSMAN user). A new directory will be created; recommended: /u01/app/oracle/Middleware12cR2/gc_inst
        8i - Let the upgrade proceed by clicking 'Install'.
        8j - Run the following script (as root) and finish the 'installation':
        $ /u01/app/oracle/Middleware12cR2/oms/allroot.sh
    9 - Turn on the Agent:
        $ <AGENT_HOME>/bin/emctl start agent
        Note that the $AGENT_HOME might be located in the old Middleware directory:
        $ /u01/app/oracle/Middleware/agent/agent_inst/bin/emctl start agent
    10 - Go to the EM UI. Select the WebLogic Target and choose the option "Refresh WebLogic Domain" from the menu.
    11 - Update the Agents: Setup -> Manage Cloud Control -> Upgrade Agents -> Add (+)
        Note that the agents may take long to show up.
    ... and that's it! Or that should be it!

    Read the article

  • What ever happened to the Defense Software Reuse System (DSRS)?

    - by emddudley
    I've been reading some papers from the early 90s about a US Department of Defense software reuse initiative called the Defense Software Reuse System (DSRS). The most recent mention of it I could find was in a paper from 2000 - A Survey of Software Reuse Repositories:

        Defense Software Repository System (DSRS)

        The DSRS is an automated repository for storing and retrieving Reusable Software Assets (RSAs) [14]. The DSRS software now manages inventories of reusable assets at seven software reuse support centers (SRSCs). The DSRS serves as a central collection point for quality RSAs, and facilitates software reuse by offering developers the opportunity to match their requirements with existing software products. DSRS accounts are available for Government employees and contractor personnel currently supporting Government projects...

        ...The DoD software community is trying to change its software engineering model from its current software cycle to a process-driven, domain-specific, architecture-based, repository-assisted way of constructing software [15]. In this changing environment, the DSRS has the highest potential to become the DoD standard reuse repository because it is the only existing deployed, operational repository with multiple interoperable locations across DoD. Seven DSRS locations support nearly 1,000 users and list nearly 9,000 reusable assets. The DISA DSRS alone lists 3,880 reusable assets and has 400 user accounts...

        The far-term strategy of the DSRS is to support a virtual repository. These interconnected repositories will provide the ability to locate and share reusable components across domains and among the services. An effective and evolving DSRS is a central requirement to the success of the DoD software reuse initiative. Evolving DoD repository requirements demand that DISA continue to have an operational DSRS site to support testing in an actual repository operation and to support DoD users. The classification process for the DSRS is a basic technology for providing customer support [16]. This process is the first step in making reusable assets available for implementing the functional and technical migration strategies.

        [14] DSRS - Defense Technology for Adaptable, Reliable Systems. URL: http://ssed1.ims.disa.mil/srp/dsrspage.html
        [15] STARS - Software Technology for Adaptable, Reliable Systems. URL: http://www.stars.ballston.paramax.com/index.html
        [16] D. E. Perry and S. S. Popovitch, "Inquire: Predicate-based use and reuse," in Proceedings of the 8th Knowledge-Based Software Engineering Conference, pp. 144-151, September 1993.

    Is DSRS dead, and were there any post-mortem reports on it? Are there other more-recent US government initiatives or reports on software reuse?

    Read the article

  • How to run WordPress and a Java web app on Tomcat on the same server?

    - by Chantz
    I have to run a WordPress site served via Apache2 and a Java-based webapp served via Tomcat on the same server. When users come to example.com or example.com/public-pages they need to be served from WordPress, but when they come to example.com/private-pages they need to be served from Tomcat.

    I have asked this question on Server Fault, where they suggested using a different port, a different IP, or a sub-domain. I want to go for the different-port solution, since it means I need to buy only one SSL certificate. I tried the reverse proxy method by having the following in my default-ssl.conf:

        <VirtualHost _default_:443>
            ServerAdmin webmaster@localhost
            ServerName localhost:443
            DocumentRoot /var/www
            <Directory /var/www>
                # For WordPress
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyRequests Off
            ProxyPass /private-pages ajp://localhost:8009/
            ProxyPassReverse /private-pages ajp://localhost:8009/
            SSLEngine on
            SSLProxyEngine On
            SSLCertificateFile /etc/apache2/ssl/apache.crt
            SSLCertificateKeyFile /etc/apache2/ssl/apache.key
        </VirtualHost>

    As you have noticed, I am using mod_proxy_ajp in Apache2 for this, and my Tomcat is listening on port 8009 and serving content. So now when I go to example.com/private-pages I am seeing the content from my Tomcat. But 2 issues are happening:

    1. All my static resources are getting 404-ed: none of my images, CSS, or JS get loaded. I see that the browser is requesting the resources using the URL example.com/css/*. This will clearly not work, because it translates to example.com:80/css/* instead of example.com:8009/css/*, and there are no such resources in the WordPress directory.

    2. If I go to example.com/private-pages/abcd I am somehow kicked to the WordPress site (which obviously displays a 404 page).

    I can understand why #1 is happening, but I have no clue why #2 is happening. Regardless, if there is another clean solution for resolving this, I would appreciate y'alls help.
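
    Not part of the question, but the usual way out of issue #1 is to make the proxied prefix and the Tomcat context path identical, so the app's absolute URLs survive the proxy. A sketch, assuming the webapp can be redeployed under the context path private-pages:

        # Sketch, not a drop-in fix: deploy the webapp under the context path
        # "private-pages" in Tomcat, then keep prefix and target identical so
        # the app's absolute URLs (/private-pages/...) proxy cleanly:
        ProxyPass        /private-pages ajp://localhost:8009/private-pages
        ProxyPassReverse /private-pages ajp://localhost:8009/private-pages

    If the app has to stay at Tomcat's root, each asset prefix it emits (/css, /js, ...) would need its own ProxyPass instead, at the cost of shadowing any same-named WordPress paths.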

    Read the article

  • JGridView

    - by Geertjan
    JGrid was announced last week, so I wanted to integrate it into a NetBeans Platform app. I.e., I'd like to use Nodes instead of the DefaultListModel that is supported natively, so that I can integrate with the Properties Window, for example. Here's how:

        import de.jgrid.JGrid;
        import java.beans.PropertyVetoException;
        import javax.swing.DefaultListModel;
        import javax.swing.JScrollPane;
        import javax.swing.event.ListSelectionEvent;
        import javax.swing.event.ListSelectionListener;
        import org.book.domain.Book;
        import org.openide.explorer.ExplorerManager;
        import org.openide.nodes.Node;
        import org.openide.util.Exceptions;

        public class JGridView extends JScrollPane {

            @Override
            public void addNotify() {
                super.addNotify();
                final ExplorerManager em = ExplorerManager.find(this);
                if (em != null) {
                    final JGrid grid = new JGrid();
                    Node root = em.getRootContext();
                    final Node[] nodes = root.getChildren().getNodes();
                    final Book[] books = new Book[nodes.length];
                    for (int i = 0; i < nodes.length; i++) {
                        Node node = nodes[i];
                        books[i] = node.getLookup().lookup(Book.class);
                    }
                    grid.getCellRendererManager().setDefaultRenderer(new OpenLibraryGridRenderer());
                    grid.setModel(new DefaultListModel() {
                        @Override
                        public int getSize() {
                            return books.length;
                        }
                        @Override
                        public Object getElementAt(int i) {
                            return books[i];
                        }
                    });
                    grid.setUI(new BookshelfUI());
                    grid.getSelectionModel().addListSelectionListener(new ListSelectionListener() {
                        @Override
                        public void valueChanged(ListSelectionEvent e) {
                            // Somehow compare the selected item
                            // with the list of books and find a matching book:
                            int selectedIndex = grid.getSelectedIndex();
                            for (int i = 0; i < nodes.length; i++) {
                                String nodeName = books[i].getTitel();
                                if (String.valueOf(selectedIndex).equals(nodeName)) {
                                    try {
                                        em.setSelectedNodes(new Node[]{nodes[i]});
                                    } catch (PropertyVetoException ex) {
                                        Exceptions.printStackTrace(ex);
                                    }
                                }
                            }
                        }
                    });
                    setViewportView(grid);
                }
            }
        }

    Above, you see references to OpenLibraryGridRenderer and BookshelfUI, both of which are part of the "JGrid-Bookshelf" sample in the JGrid download. The above is specific to Book objects, i.e., that's one of the samples that comes with the JGrid download. I need to make the above more general, so that any kind of object can be handled without requiring changes to the JGridView. Once you have the above, it's easy to integrate it into a TopComponent, just like any other NetBeans explorer view; a sketch follows below.
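
    A minimal sketch of that TopComponent integration, assuming the Book nodes come from a hypothetical BookChildFactory (the ExplorerManager wiring is the standard NetBeans Platform pattern):

        import java.awt.BorderLayout;
        import org.openide.explorer.ExplorerManager;
        import org.openide.explorer.ExplorerUtils;
        import org.openide.nodes.AbstractNode;
        import org.openide.nodes.Children;
        import org.openide.windows.TopComponent;

        // Sketch only: BookChildFactory is hypothetical; everything else is
        // standard NetBeans Platform explorer wiring.
        public final class BookshelfTopComponent extends TopComponent
                implements ExplorerManager.Provider {

            private final ExplorerManager em = new ExplorerManager();

            public BookshelfTopComponent() {
                setLayout(new BorderLayout());
                // JGridView locates this manager via ExplorerManager.find(this)
                add(new JGridView(), BorderLayout.CENTER);
                em.setRootContext(new AbstractNode(
                        Children.create(new BookChildFactory(), true)));
                // lets the Properties Window track the grid selection
                associateLookup(ExplorerUtils.createLookup(em, getActionMap()));
            }

            @Override
            public ExplorerManager getExplorerManager() {
                return em;
            }
        }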

    Read the article

  • How do I serve multiple domains from the same directory and codebase without my configuration breaking when apache.conf is overwritten?

    - by neokio
    I have 20 domains on a VPS running cPanel. One public_html is filled with code; the remaining 19 are symbolic links to that one. (For example, assets is a directory within public_html; for the 19 others, there's a symbolic link to that directory in each account's public_html dir.) It's all PHP / MySQL database driven, with content changing depending on the domain. It works like a charm, assuming cPanel has suExec enabled correctly, and assuming apache.conf does NOT have SymLinksIfOwnerMatch enabled.

    However, every few weeks, my apache.conf is mysteriously overwritten, re-enabling SymLinksIfOwnerMatch and disabling all 19 linked sites for as long as it takes for me to notice. Here's the offending line in apache.conf:

        <Directory "/">
            AllowOverride All
            Options ExecCGI FollowSymLinks IncludesNOEXEC Indexes SymLinksIfOwnerMatch
        </Directory>

    The addition of SymLinksIfOwnerMatch disables the sites in a strange way: the HTML is generated correctly, but all css/js/images in the HTML fail to load, and clicking any link redirects to /. I have no idea why. I do have a few things in my .htaccess, which work fine when SymLinksIfOwnerMatch is not present:

        <IfModule mod_rewrite.c>
            # www.example.com -> example.com
            RewriteCond %{HTTPS} !=on
            RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
            RewriteRule ^ http://%1%{REQUEST_URI} [R=301,L]

            # Remove query strings from static resources
            RewriteRule ^assets/js/(.*)_v(.*)\.js /assets/js/$1.js [L]
            RewriteRule ^assets/css/(.*)_v(.*)\.css /assets/css/$1.css [L]
            RewriteRule ^assets/sites/(.*)/(.*)_v(.*)\.css /assets/sites/$1/$2.css [L]

            # Block access to hidden files and directories
            RewriteCond %{SCRIPT_FILENAME} -d [OR]
            RewriteCond %{SCRIPT_FILENAME} -f
            RewriteRule "(^|/)\." - [F]

            # SLIR ... reroute images to image processor
            RewriteCond %{REQUEST_URI} ^/images/.*$
            RewriteRule ^.*$ - [L]

            # ignore rules if URL is a file
            RewriteCond %{REQUEST_FILENAME} !-f

            # ignore rules if URL is not php
            #RewriteCond %{REQUEST_URI} !\.php$

            # catch-all for routing
            RewriteRule . index.php [L]
        </IfModule>

    I also use most of the 5G Blacklist 2013 for protection against exploits and other depravities. Again, all of this works great, except when SymLinksIfOwnerMatch gets added back into apache.conf.

    Since I've failed to find the cause of whatever cPanel/security update is overwriting apache.conf, I thought there might be a more correct way to accomplish my goal using group permissions. I've created a 'www' group, added all accounts to the group, and chmod -R'd the code source to use that group. Everything is 644 or 755. But that doesn't seem to be enough. My Unix isn't that strong. Do you need to restart something for group changes to take effect? Probably not.

    Anyway, I'm entering unknown territory. Can anyone recommend the right way to configure a website for multiple sites using one codebase that doesn't rely on apache.conf?
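
    Not from the question, but one commonly suggested direction: if cPanel's per-account vhosts are the obstacle, a single vhost with ServerAlias entries can serve every domain from one DocumentRoot with no symlinks at all. A sketch with placeholder names (note this bypasses cPanel's own vhost management, which may not be acceptable on a cPanel box):

        # Sketch: all domains share one DocumentRoot; PHP can still branch
        # on HTTP_HOST to vary the content per domain. Names are placeholders.
        <VirtualHost *:80>
            ServerName example1.com
            ServerAlias www.example1.com example2.com www.example2.com
            DocumentRoot /home/shared/public_html
            <Directory /home/shared/public_html>
                Options FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>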

    Read the article

  • Should I use my own public API on my site (via JS)?

    - by newboyhun
    First of all, this question is far different from other 'public API' questions like this one: Should a website use its own public API?. Second, sorry for my English. You can find the question summarized at the bottom.

    What I want to achieve is a big website with a public API, so that people who like programming (like me) and like my website can replicate my website's data with a much better approach (of course with some restrictions). Almost everything could be used via the public API. Because of this, I was thinking about making the whole website AJAX driven. There would be parts of the API which would be limited to my website (domain) only, like login and registering. There would be only an INTERFACE on the client side, which would use the public and private API to make this interface work. The website would be ONLY CLIENT SIDE; well, I mean, the website would only use AJAX to use the API.

    How do I imagine this? The website would be like a mobile application: the application only sends a request to a webserver, which returns JSON; the application parses it and uses it to advance in the application (e.g. login).

    My thoughts:

    Pros:
    - The whole website is built up with javascript, which means I don't need to transfer the HTML to the client, saving bandwidth. (I hope so.)
    - Anyone can use the data of my website to make their own cool things. (Is this a con or a pro? O_O)
    - The public API is always in use, so I can see if there are any errors.

    Cons:
    - Without Javascript the website is unusable.
    - The bad guys can easily load the server by requesting too much data (like 10,000 requests per second), but this can be countered by limiting it with some PHP code and logging.
    - Probably much more work.

    So the question in a few words is: Should I build my website around my own API? Is it good to work only on the client side? Is this good for a big website? (E.g. Facebook; yeah, Facebook is a different story, but could it run with an 'architecture' like this?)

    Read the article

  • PXE boot linux. PXE-E51: No DHCP or proxyDHCP offers were received

    - by athspk
    I am trying to have an Ubuntu box (192.168.10.9) acting as a PXE server, but I have trouble getting DHCP to work. The PXE server is connected to a SOHO router (192.168.10.1) acting as a switch. I have disabled the DHCP server on the router.

        $ dhcpd --version
        isc-dhcpd-4.2.4

    The contents of /etc/dhcp/dhcpd.conf:

        ddns-update-style none;
        option domain-name-servers 192.168.10.1;
        default-lease-time 3600;
        max-lease-time 7200;
        authoritative;
        log-facility local7;
        allow booting;
        allow bootp;

        subnet 192.168.10.0 netmask 255.255.255.0 {
            range dynamic-bootp 192.168.10.101 192.168.10.200;
            option routers 192.168.10.1;
            option broadcast-address 192.168.10.255;
            next-server 192.168.10.9;
            filename "/tftpboot/pxelinux.0";
        }

    The contents of /etc/default/isc-dhcp-server:

        INTERFACES="eth0"

    When the client boots, it tries to get an IP address from the server but fails with the following error message:

        PXE-E51: No DHCP or proxyDHCP offers were received.

    On the server side, I was tailing /var/log/syslog while the client tried to boot:

        Dec 4 12:57:10 athspk-Dell dhcpd: DHCPDISCOVER from 00:1f:d0:8e:6b:db via eth0
        Dec 4 12:57:11 athspk-Dell dhcpd: DHCPOFFER on 192.168.10.101 to 00:1f:d0:8e:6b:db via eth0
        Dec 4 12:57:12 athspk-Dell dhcpd: DHCPDISCOVER from 00:1f:d0:8e:6b:db via eth0
        Dec 4 12:57:12 athspk-Dell dhcpd: DHCPOFFER on 192.168.10.101 to 00:1f:d0:8e:6b:db via eth0
        Dec 4 12:57:17 athspk-Dell dhcpd: DHCPDISCOVER from 00:1f:d0:8e:6b:db via eth0
        Dec 4 12:57:17 athspk-Dell dhcpd: DHCPOFFER on 192.168.10.101 to 00:1f:d0:8e:6b:db via eth0
        Dec 4 12:57:25 athspk-Dell dhcpd: DHCPDISCOVER from 00:1f:d0:8e:6b:db via eth0
        Dec 4 12:57:25 athspk-Dell dhcpd: DHCPOFFER on 192.168.10.101 to 00:1f:d0:8e:6b:db via eth0

    Please advise. Thanks in advance.

    Read the article

  • What exactly is a X-YMailISG header?

    - by iainH
    Finally ... our emails are being seen by Yahoo! not as junk anymore. Hurray! However, I notice that the Yahoo! receiving MTA adds in an X-YMailISG header. It's very large ... 2**10 bits? Now that I've invested too large a chunk of my waking life in crafting our email headers, I'm curious to know what an X-YMailISG header is. Can anybody tell me? Does it pose any security / authenticity issues? There's very little intelligible in Google results.

    Background: After many days tweaking TXT records in our domain's DNS zone file for SPF and DKIM, I have at last succeeded in generating email from our Drupal site that Yahoo! no longer marks as X-YahooFilteredBulk, and the excellent service [email protected] returns results that show the emails passing SPF, DKIM and Sender-ID checks and appearing to SpamAssassin as ham. Yahoo! even adds a Received-SPF: pass header.

    Useful links:
    http://www.goldfisch.at/knowwiki/howtos/dkim-filter
    http://old.openspf.org/wizard.html

    Strangely enough, the SPF TXT record needed / allowed a blank key / name field in our registrar's DNS management panel, whereas the DKIM record needed {selector}._domainkey as the key / name of the DKIM strings.

    Read the article

  • SCCM 2012 - Windows 8 WSUS

    - by Owen
    We're using SCCM 2012 to deploy Windows Updates on our domain, and our Windows 8 clients have started failing with error 80240438 when they try to update. Windows 7 clients update fine, but Windows 8 clients refuse to do anything. I've done a search online and it seems to only reference Windows Intune. Has anyone seen any similar behavior on Windows 8 machines?

    If we don't get that error, we're getting 80244021, which seems to indicate that the server can't be found... but the clients can resolve it just fine, and our exceptions are defined on the proxy too. A bit stuck here!

        2012-11-22 14:45:28:935 476 998 Agent *********
        2012-11-22 14:45:28:935 476 998 Agent ** END ** Agent: Finding updates [CallerId = AutomaticUpdates]
        2012-11-22 14:45:28:935 476 998 Agent *************
        2012-11-22 14:45:28:935 476 998 Agent WARNING: WU client failed Searching for update with error 0x80240438
        2012-11-22 14:45:28:935 476 c74 AU >>## RESUMED ## AU: Search for updates [CallId = {EAECB947-48AC-43BE-8F98-C44727E4A131} ServiceId = {3DA21691-E39D-4DA6-8A4B-B43877BCB1B7}]
        2012-11-22 14:45:28:935 476 c74 AU # WARNING: Search callback failed, result = 0x80240438
        2012-11-22 14:45:28:935 476 c74 AU #########
        2012-11-22 14:45:28:935 476 c74 AU ## END ## AU: Search for updates [CallId = {EAECB947-48AC-43BE-8F98-C44727E4A131} ServiceId = {3DA21691-E39D-4DA6-8A4B-B43877BCB1B7}]
        2012-11-22 14:45:28:935 476 c74 AU #############
        2012-11-22 14:45:28:935 476 c74 AU All AU searches complete.
        2012-11-22 14:45:28:935 476 c74 AU # WARNING: Failed to find updates with error code 80240438
        2012-11-22 14:45:28:935 476 c74 AU AU setting next detection timeout to 2012-11-22 04:12:23
        2012-11-22 14:45:33:936 476 c9c Report REPORT EVENT: {EE35CD79-FD2A-472D-BFC9-0420F5D60C04} 2012-11-22 14:45:28:935+1300 1 148 [AGENT_DETECTION_FAILED] 101 {00000000-0000-0000-0000-000000000000} 0 80240438 AutomaticUpdates Failure Software Synchronization Windows Update Client failed to detect with error 0x80240438.
        2012-11-22 14:45:33:938 476 c9c Report CWERReporter::HandleEvents - WER report upload completed with status 0x8
        2012-11-22 14:45:33:938 476 c9c Report WER Report sent: 7.8.9200.16420 0x80240438 00000000-0000-0000-0000-000000000000 Scan 101 Managed
        2012-11-22 14:45:33:938 476 c9c Report CWERReporter finishing event handling. (00000000)

    Thanks in advance

    Read the article

  • Cannot create a project TFS 2012 - TF218027

    - by GrandMasterFlush
    I've just installed TFS 2012 and am trying to create a new project in the default collection via Visual Studio 2012, but I keep getting this error message:

        TF218027: The following reporting folder could not be created on the server that is running SQL Server Reporting Services: /TfsReports/DefaultCollection. The report server is located at: http://<servername>/Reports. The error is: The permissions granted to user '<domain>/grandmasterflush' are insufficient for performing this operation. Verify that the path is correct and that you have sufficient permissions to create the folder on that server and then try again.

    I've checked the permissions: my user is a member of the Project Collection Administrators group, and that group has the 'Create new project' permission set to allow.

    The only thing I can think of is that the user I created during installation for SharePoint access and report viewing does not have permission to write to the reports folder. However, if I select "Do not configure a SharePoint site at this time" I still get the error message. I can't find the reports folder to check the permissions either.

    TFS is using an instance of SQL 2012 that was already on the machine when TFS was installed. Can anyone see what I'm doing wrong, please?

    Read the article

  • Windows CE 6.0 time setting in registry being overridden

    - by JaminSince83
    I have asked this question on Stack Overflow, but it's probably better suited here. I have a Motorola MC3190 mobile barcode scanning device with Windows CE 6.0. I want to get the device to sync its date/time on boot-up with our domain controller, using a registry file that I have created. I have used the registry file below to get close to what I require:

        REG 1
        REGEDIT4

        [HKEY_LOCAL_MACHINE\Services\TIMESVC]
        "UserProcGroup"=dword:00000002
        "Flags"=dword:00000010
        "multicastperiod"=dword:36EE80
        "threshold"=dword:5265C00
        "recoveryrefresh"=dword:36EE80
        "refresh"=dword:5265C00
        "Context"=dword:0
        "Autoupdate"=dword:1
        "server"="NAMEOFMYSERVER"
        "ServerRole"=dword:0
        "Trustlocalclock"=dword:0
        "Dll"="timesvc.dll"
        "Keep"=dword:1
        "Prefix"="NTP"
        "Index"=dword:0

        [HKEY_LOCAL_MACHINE\nls]
        "DefaultLCID"=dword:00000809

        [HKEY_LOCAL_MACHINE\nls\overrides]
        "LCID"=dword:00000809

        [HKEY_LOCAL_MACHINE\Time]
        @="UTC"
        "TimeZoneInformation"=hex:\
        00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
        00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00

        [HKEY_LOCAL_MACHINE\Time Zones]
        @="UTC"

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Clock]
        "AutoDST"=dword:00000000

    Now it gets the correct date and shows the time zone correctly; however, the time is always 5 hours behind, on Eastern Standard Time, which is really annoying. I have researched heavily into this, and this question has been asked before here. As you will see, I have copied what it suggests, but it doesn't work. Something is overriding the time, and I don't understand enough about it to resolve this. I cannot find any other setting to get it to set the time correctly. Any help would be greatly appreciated.

    Read the article

  • Outlook 2007 unable to connect to Exchange server

    - by mattwarren
    I came into the office a week ago and Outlook has refused to connect ever since; it just says "Disconnected" in the bottom right-hand corner. I've tried restarting it, rebooting Windows, etc. I'm the only one in our office who is having this problem, so it's not a general problem with the server.

    Things I've tried:
    - Pinging the server via IP address and host name: both work fine
    - Connecting via OWA: this works using the same machine name
    - Connecting to Exchange via HTTP ("Outlook Anywhere"): doesn't work
    - None of the suggestions in this question helped: http://serverfault.com/questions/21755/can-ping-exchange-server-cant-connect-outlook-to-it
    - Disabling Windows Firewall on my laptop also has no effect

    There are no items in the event viewer that indicate that anything is up, and no permissions have changed on the server since it worked.

    Update: I've just tried logging onto a completely different PC, using my domain controller user-name/password. When I set up Outlook on there it also fails, so the problem isn't specific to one PC/Outlook installation; it's something about my particular user name, but not when using OWA? What else can I do to diagnose this? Any suggestions?

    Read the article

  • Plesk Postfix SMTP 550 5.7.1 "Command rejected" for one external sender

    - by Mnebuerquo
    My server is rejecting emails from one external sender. I suspect this might be a misconfiguration on the sending server, but I'm not sure from these error messages. The non-delivery report the sender gets contains this text:

        #5.7.1 smtp;550 5.7.1 Command rejected> #SMTP#

    I also see this message in /var/messages at about the same time as the rejection message was sent, though I'm not sure if it's actually related:

        Nov 29 12:29:28 localhost postfix/smtpd[31829]: sql_sqlite3 plugin: no result found

    I'm using Plesk 10.4.4 Update #47, CentOS 6.2, Postfix 2.8.4-11100615 on my mail server. This is only happening for one sender so far, but I found a Google result on experts-exchange.com which seemed to identify the same problem, with the same sending domain. It was posted back in June and currently has no answers, so even if I were a paying customer it wouldn't be answered. (http://www.experts-exchange.com/Software/Server_Software/Email_Servers/Q_27760746.html)

    The generating server is bigfish.com. What I need to determine is whether this is a problem on my server or a problem with bigfish.com. Is there more information I can find in config files, logs, etc. to figure this out?

    Read the article

  • Transfer all 1&1 web and e-mail services to own Synology NAS using No-IP for DDNS

    - by Neo
    I have a domain, x-treem.net. The registrar is DomainDiscover and I have a hosting package with 1&1 which includes web and e-mail. I also have an additional package with 1&1, Microsoft Exchange, which centralises all my e-mails, tasks, contacts, notes, etc., and I connect to it with my PC (Outlook) and my Android phone.

    I have just purchased a Synology NAS (DS213) and I can see I can run a web server (Web Station) and an e-mail server (Mail Server) on it, amongst other things. I am behind a dynamic IP. So, I'm looking for some clarification on what I must do to consolidate my services, make use of my NAS to do as much as possible, and save third-party hosting costs.

    My registrar specifies the nameservers NS45.1AND1.CO.UK and NS46.1AND1.CO.UK. The MX records are mx00.1and1.co.uk and mx01.1and1.co.uk. I'm aware of the concept of DDNS and I am looking at using No-IP.com for this.

    This is where I need clarification. If I registered with the No-IP paid service, pointed my registrar to No-IP's nameservers, and used the DDNS support on my NAS (which supports No-IP), then any requests to x-treem.net would go to my NAS. Is that correct? Web requests would therefore hit the web server on my NAS, and e-mails would hit the mail server on my NAS?

    So, given all of the above, I could then drop 1&1 completely and use my NAS for everything. I use MySQL, phpMyAdmin, and phpBB on 1&1, all of which the Synology NAS appears to support in its available packages. As for Microsoft Exchange, Synology offers Zarafa, which appears to be a drop-in replacement for Exchange.

    Am I on the right track here, or is there anything I am missing?

    Read the article

  • How can I set IIS Application Pool recycle times without resorting to the ugly syntax of Add-WebConfiguration?

    - by ObligatoryMoniker
    I have been scripting the configuration of our IIS 7.5 instance and through bits and pieces of other peoples scripts I have come up with a syntax that I like: $WebAppPoolUserName = "domain\user" $WebAppPoolPassword = "password" $WebAppPoolNames = @("Test","Test2") ForEach ($WebAppPoolName in $WebAppPoolNames ) { $WebAppPool = New-WebAppPool -Name $WebAppPoolName $WebAppPool.processModel.identityType = "SpecificUser" $WebAppPool.processModel.username = $WebAppPoolUserName $WebAppPool.processModel.password = $WebAppPoolPassword $WebAppPool.managedPipelineMode = "Classic" $WebAppPool.managedRuntimeVersion = "v4.0" $WebAppPool | set-item } I have seen this done a number of different ways that are less terse and I like the way this syntax of setting object properties looks compared to something like what I see on TechNet: Set-ItemProperty 'IIS:\AppPools\DemoPool' -Name recycling.periodicRestart.requests -Value 100000 One thing I haven't been able to figure out though is how to setup recycle schedules using this syntax. This command sets ApplicationPoolDefaults but is ugly: add-webconfiguration system.applicationHost/applicationPools/applicationPoolDefaults/recycling/periodicRestart/schedule -value (New-TimeSpan -h 1 -m 30) I have done this in the past through appcmd using something like the following but I would really like to do all of this through powershell: %appcmd% set apppool "BusinessUserApps" /+recycling.periodicRestart.schedule.[value='01:00:00'] I have tried: $WebAppPool.recycling.periodicRestart.schedule = (New-TimeSpan -h 1 -m 30) This has the odd effect of turning the .schedule property into a timespan until I use $WebAppPool = get-item iis:\AppPools\AppPoolName to refresh the variable. There is also $WebappPool.recycling.periodicRestart.schedule.Collection but there is no add() function on the collection and I haven't found any other way to modify it. Does anyone know of a way I can set scheduled recycle times using syntax consistent with the code I have written above?

    Read the article

  • subnetting .. is 192.168.0.1/24 different than 192.168.0.1/25

    - by SM
    So i am trying to understand subnetting. I understand that you can take 192.168.0.0/24 domain and break it up in 2 subnets. 192.168.0.0/25 and 192.168.0.128/25 .. so what if i need to create 2 more subnets inside both of those above subnets. What if i need to create 2 subnets, which have 2 more subnets inside them and each of those 2 subnets have 2 hosts. As per my calculation, this would be something like. top 2 subnets: 192.168.0.0/25 and 192.168.0.128/25 subnets of 192.168.0.0/25: 192.168.0.0/26 - 192.168.0.128/26 subnets of 192.168.0.128/25: 192.168.0.128/26 - 192.168.0.192/26 hosts of 192.168.0.0/25: 192.168.0.1/25 - 192.168.0.2/25 hosts of 192.168.0.128/25: 192.168.0.129/25 - 192.168.0.130/25 hosts of 192.168.0.0/26: 192.168.0.1/26 - 192.168.0.2/26 hosts of 192.168.0.128/26: 192.168.0.129/26 - 192.168.0.130/26 hosts of 192.168.0.128/26: 192.168.0.129/26 - 192.168.0.130/26 hosts of 192.168.0.192/26: 192.168.0.193/26 - 192.168.0.194/26 The above doesnt seem right, i am moving down like a tree, however the subnet mask and IPs are being repeated, i am just add 1 to /x and making the ips from there. Can anyone please tell me if this is correct The scenario i am trying to understand is similar to this, however i just want to add hosts on each level as well. http://www.ciscotrick.com/wp-content/uploads/2008/08/the_mathematical_relationship_between_network_block_sizes.jpg

    Read the article

  • ADFS 2.1 proxy trust establishment error

    - by Tommy Jakobsen
    I'm trying to install an ADFS proxy. In our intranet we have a ADFS 2.1 server running on Windows 2012 which is working fine. Now we're trying to deploy a proxy to this one for internet access, using Windows 2012 R2's Web Application Proxy. I'm getting the following error on the proxy server, event ID 393: Message : An error occurred while attempting to establish a trust relationship with the Federation Server. An error occurred when attempting to establish a trust relationship with the federation service. Error: Forbidden Context : DeploymentTask Status : Error I'm not getting any errors on the ADFS server. I've tried with different credentials. The ADFS service account, a domain administrator who is a member of the local administrators group on the ADFs server, and the local administrator account on the ADFS server. Same error message. Both port 80 and 443 is accessible from the proxy server to the internal ADFS server, and I can access the ADFS metadata endpoint from the proxy server. I'm using the same trusted SSL certificate (wildcard) on both machines. Do you have any ideas that can help me troubleshoot this problem?

    Read the article

  • SQL Server 2008 Cluster Installation - First network name always fails

    - by boflynn
    I'm testing failover clustering in Windows Server 2008 to host a SQL Server 2008 installation using this installation guide. My base cluster is installed and working properly, as well as clustering the DTC service. However, when it comes time to install SQL Server, my first attempt at installation always fails with the same message and seems to "taint" the network name. For example, with my previous cluster attempt, I was installing SQL Server as VSQL. After approximately 15 attempts of installation and trying to resolve the errors, e.g. changing domain accounts for SQL, setting SPNs, etc., I typoed the network name as VQSL and the installation worked. Similarly on my current cluster, I tried installing with the SQL service named PROD-C1-DB and got the same errors as last time until I tried changing the name to anything else, e.g. PROD-C1-DB1, SQL, TEST, etc., at which point the install works. It will even install to VSQL now. While testing, my install routine was: Run setup.exe from patched media, selecting appropriate options After the install fails, I'd chose "Remove node from a SQL Server failover cluster" and remove the single, failed, node Attempt to diagnose problem, inspect event logs, etc. Delete the computer account that was created for the SQL Service from Active Directory Delete the MSSQL10.MSSQLSERVER folder from the shared data drive The error message I receive from the SQL Server installer is: The following error has occurred: The cluster resource 'SQL Server' could not be brought online. Error: The group or resource is not in the correct state to perform the requested operation. (Exception from HRESULT: 0x8007139F) Along with hundreds of the following errors in the Application event log: [sqsrvres] checkODBCConnectError: sqlstate = 28000; native error = 4818; message = [Microsoft][SQL Server Native Client 10.0][SQL Server]Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. System configuration notes: Windows Server 2008 Enterprise Edition x64 SQL Server 2008 Enterprise Edition x64 using slipstreamed SP1+CU1 media Dell PowerEdge servers Fibre attached storage

    Read the article

  • Plesk SSL Certificate (Default cert when SSL enabled, CORRECT cert when SSL is disabled)

    - by hztetra
    I'm running Plesk 8.6.0. I have an SSL cert installed through Plesk's admin interface, but I have a bit of an issue: when I enable SSL for the site, select my cert, and restart httpd, Plesk defaults to using my self-signed default certificate. Conversely, when I disable SSL support for the domain, all of a sudden Plesk is using my new SSL certificate.

    Unfortunately, when I try to view any folder on the site (mydomain.tld/folder) I'm simply met with a 404 (with files placed both in httpdocs and httpsdocs). I switch SSL support back on, Plesk defaults back to the default self-signed cert, and I can then view the folders that were not previously accessible. Any ideas?

    One further note: I tried following http://kb.parallels.com/en/939. Once I tried to restart httpd with the edited ssl.conf file, I received an "httpd could not start" error. I restored the original ssl.conf file and still received the "could not start" error, so as of now I am running without an ssl.conf file. The following is the error I receive when I attempt to reintroduce ssl.conf:

        Starting httpd: [Mon Aug 23 15:45:40 2010] [warn] module ssl_module is already loaded, skipping
        (98)Address already in use: make_sock: could not bind to address 0.0.0.0:443
        no listening sockets available, shutting down
        Unable to open logs

    Read the article

  • Squid 2.7 Stable 8 on Windows 2008

    - by Sadish
    Hi all, I have a Windows 2008 SP2 Active Directory domain, which has Vista, Windows 2000, XP and Windows 7 clients as members. I installed Squid 2.7 Stable 8 on Windows 2008 SP2, trying to configure NTLM-based authentication for Internet surfing. Basically, I have defined 2 groups for Internet allow and deny, and Internet access is allowed based on authorization. But after trying for over 3 weeks, it seems that the authentication does not happen: the browser keeps asking for a user name and password. I would like to know if there is any solution for this. I'm totally frustrated and unable to move forward.

    My configuration, as modified from the default squid.conf, is as follows.

    Line 292:

        auth_param ntlm program c:/squid/libexec/mswin_ntlm_auth.exe
        auth_param ntlm children 5

    Line 626:

        acl localnet proxy_auth REQUIRED src 10.0.0.1/255
        acl InetAllow external win_domain_group InternetUsers
        acl InetDeny external win_domain_group InternetDenyGroup
        http_access allow InetAllow
        http_access deny InetDeny

    Comment out any "acl localnet src" lines.

    Line 294:

        external_acl_type win_domain_group ttl=120 %LOGIN c:/squid/libexec/mswin_check_lm_group.exe -G

    My Windows 2008 server is running on 192.168.0.203 and the clients are on the 10.0.0.x subnet, for which I need authentication. Pls help !!!

    Read the article

  • Nginx as a proxy to Tomcat

    - by user36812
    Pardon me, this is my first attempt at Nginx-Jetty instead of Apache-JK-Tomcat. I deployed the myapp.war file to $JETTY_HOME/webapps/, and the app is accessible at the URL http://myIP:8080/myapp.

    I did a default installation of Nginx, and the default Nginx page is accessible at myIP. Then I modified the default domain under /etc/nginx/sites-enabled to the following:

        server {
            listen 80;
            server_name mydomain.com;

            access_log /var/log/nginx/localhost.access.log;

            location / {
                #root /var/www/nginx-default;
                #index index.html index.htm;
                proxy_pass http://127.0.0.1:8080/myapp/;
            }

            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root /var/www/nginx-default;
            }
        }

    Now I get the index page of myapp (running in Jetty) when I hit myIP, which is good. But all the links are malformed. E.g. the link to the css is mydomain.com/myapp/css/style.css, while what it should have been is mydomain.com/css/style.css. It seems to be mapping mydomain.com to 127.0.0.1:8080 instead of 127.0.0.1:8080/myapp/.

    Any idea what I'm missing? Do I need to change anything on the Jetty side too?
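
    Not from the question, but the usual fix for this kind of context-path mismatch is to make the two sides agree. A sketch, assuming the app can be deployed as Jetty's root webapp (e.g. renamed to root.war) so its generated links no longer carry the /myapp prefix:

        # Sketch: with the app at Jetty's root context, no path translation
        # is needed and generated links stay consistent.
        server {
            listen 80;
            server_name mydomain.com;

            location / {
                proxy_pass http://127.0.0.1:8080/;
                proxy_set_header Host $host;
            }
        }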

    Read the article

  • powershell v2 remoting - How do you enable unencrypted traffic

    - by Peter Walke
    I'm writing a PowerShell v2 script that I'd like to run against a remote server. When I run it, I get the error:

        Connecting to remote server failed with the following error message: The WinRM client cannot process the request. Unencrypted traffic is currently disabled in the client configuration. Change the client configuration and try the request again. For more information, see the about_Remote_Troubleshooting Help topic.

    I looked at the online help for about_Remote_Troubleshooting, but it didn't point me towards how to enable unencrypted traffic. Below is the script that is causing me problems.

    Note: I have already run Enable-PSRemoting on the remote machine to allow it to accept incoming requests. I have tried to use a session option variable, but it doesn't seem to make any difference.

        $key = "HKLM:\SOFTWARE\Microsoft\PowerShell\1\ShellIds"
        Set-ItemProperty $key ConsolePrompting True

        $tvar = "password"
        $password = ConvertTo-SecureString -String $tvar -AsPlainText -Force
        $username = "domain\username"
        $mySessionOption = New-PSSessionOption -NoEncryption
        $credential = New-Object System.Management.Automation.PSCredential($username, $password)

        Invoke-Command -FilePath C:\scripts\RemoteScript.ps1 `
            -SessionOption $mySessionOption `
            -Authentication Digest `
            -Credential $credential `
            -ComputerName RemoteServer

    How do I enable unencrypted traffic?
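
    Not shown in the question: the client-side setting the error message refers to lives under the WSMan: drive. A sketch (run from an elevated prompt on the client machine):

        # Sketch: allow the WinRM *client* to send unencrypted traffic.
        # Only sensible on a trusted network; Digest auth still protects
        # the credentials themselves.
        Set-Item WSMan:\localhost\Client\AllowUnencrypted -Value $true

        # The service side has an equivalent switch if the server must
        # accept unencrypted requests:
        # Set-Item WSMan:\localhost\Service\AllowUnencrypted -Value $true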

    Read the article
