Search Results

Search found 13411 results on 537 pages for 'proxy servers'.


  • Consolidation in a Database Cloud

    - by B R Clouse
    Consolidation of multiple databases onto a shared infrastructure is the next step after Standardization. The potential consolidation density is a function of the extent to which the infrastructure is shared. The three models provide increasing degrees of sharing:

    Server: each database is deployed in a dedicated VM. Hardware is shared, but most of the software infrastructure is not. Standardization is often applied incompletely, since operating environments can be moved as-is onto the shared platform. The potential for VM sprawl is an additional downside.

    Database: multiple database instances are deployed on a shared software/hardware infrastructure. This model is very efficient and easily implemented with the features in the Oracle Database and supporting products. Many customers have moved to this model and achieved significant, measurable benefits.

    Schema: multiple schemas are deployed within a single database instance. The most efficient model, it places constraints on the environment. Usually this model will be implemented only by customers deploying their own applications. (Note that a single deployment can combine Database and Schema consolidations.)

    Customer value: lower costs, better system utilization. In this phase of the maturity model, under-utilized hardware can be used to host more workloads, or retired and those workloads migrated to consolidation platforms. Customers benefit from higher utilization of the hardware resources, resulting in reduced data center floor space and lower power and cooling costs. And the OpEx savings from Standardization are multiplied, since there are fewer physical components (both hardware and software) to manage.

    Customer value: higher productivity. The OpEx benefits from Standardization are compounded: not only are there fewer types of things to manage, now there are fewer entities to manage. In this phase, customers discover that their IT staff has time to move away from "day-to-day" tasks and start investing in higher-value activities. Database users benefit from consolidating onto shared infrastructures by relieving themselves of the requirement to maintain their own dedicated servers. Also, if the shared infrastructure offers capabilities such as High Availability / Disaster Recovery, which are often beyond the budget and skill set of a standalone database environment, then moving to the consolidation platform can provide access to those capabilities, resulting in less downtime.

    Capabilities / Characteristics: In this phase, customers will typically deploy fixed-size clusters and consolidate on a cluster until that cluster is deemed "full," at which point a new cluster is built. Customers will define one or a few cluster architectures that are used wherever possible; occasionally there may be deployments which must be handled as exceptions. The "full" policy may be based on the number of databases deployed on the cluster, or observed peak workload, etc. IT will own the provisioning of new databases on a cluster, making the decision of when and where to place new workloads. Resources may be managed dynamically, e.g., as a priority workload increases, it may be given more CPU and memory to handle the spike. Users will be charged at a fixed, relatively coarse level; or in some cases, no charging will be applied.

    Activities / Tasks: Oracle offers several tools to plan a successful consolidation. Real Application Testing (RAT) has a feature to help plan and validate database consolidations. Enterprise Manager 12c's Cloud Management Pack for Database includes a planning module. Looking ahead, customers should start planning for the Services phase by defining the Service Catalog that will be made available for database services.
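    The dynamic resource management mentioned above is typically implemented with instance caging and Database Resource Manager. A minimal sketch, assuming a consolidated instance and a pre-created plan named DAYTIME_PLAN (the plan name and CPU figure are illustrative, not from the article):

        # Cap one instance at 4 CPUs and activate a resource plan:
        sqlplus / as sysdba <<'SQL'
        ALTER SYSTEM SET cpu_count = 4 SCOPE = BOTH;
        ALTER SYSTEM SET resource_manager_plan = 'DAYTIME_PLAN' SCOPE = BOTH;
        SQL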

    Read the article

  • Cloud – the forecast is improving

    - by Rob Farley
    There is a lot of discussion about “the cloud”, and how that affects people’s data stories. Today the discussion enters the realm of T-SQL Tuesday, hosted this month by Jorge Segarra. Over the years, companies have invested a lot in making sure that their data is good, and I mean every aspect of it – the quality of it, the security of it, the performance of it, and more. Experts such as those of us at LobsterPot Solutions have helped these companies with this, and continue to work with clients to make sure that data is a strong part of their business, not an oversight. Whether business intelligence systems are being utilised or not, every business needs to be able to rely on its data, and have the confidence in it. Data should be a foundation upon which a business is built. In the past, data had been stored in paper-based systems. Filing cabinets stored vital information. Today, people have server rooms with storage of various kinds, recognising that filing cabinets don’t necessarily scale particularly well. It’s easy to ‘lose’ data in a filing cabinet, when you have people who need to make sure that the sheets of paper are in the right spot, and that you know how things are stored. Databases help solve that problem, but still the idea of a large filing cabinet continues, it just doesn’t involve paper. If something happens to the physical ‘filing cabinet’, then the problems are larger still. Then the data itself is under threat. Many clients have generators in case the power goes out, redundant cables in case the connectivity dies, and spare servers in other buildings just in case they’re required. But still they’re maintaining filing cabinets. You see, people like filing cabinets. There’s something to be said for having your data ‘close’. Even if the data is not in readable form, living as bits on a disk somewhere, the idea that its home is ‘in the building’ is comforting to many people. They simply don’t want to move their data anywhere else. The cloud offers an alternative to this, and the human element is an obstacle. By leveraging the cloud, companies can have someone else look after their filing cabinet. A lot of people really don’t like the idea of this, partly because the administrators of the data, those people who could potentially log in with escalated rights and see more than they should be allowed to, who need to be trusted to respond if there’s a problem, are now a faceless entity in the cloud. But this doesn’t mean that the cloud is bad – this is simply a concern that some people may have. In new functionality that’s on its way, we see other hybrid mechanisms that mean that people can leverage parts of the cloud with less fear. Companies can use cloud storage to hold their backup data, for example, backups that have been encrypted and are therefore not able to be read by anyone (including administrators) who don’t have the right password. Companies can have a database instance that runs locally, but which has its data files in the cloud, complete with Transparent Data Encryption if needed. There can be a higher level of control, making the change easier to accept. Hybrid options allow people who have had fears (potentially very justifiable) to take a new look at the cloud, and to start embracing some of the benefits of the cloud (such as letting someone else take care of storage, high availability, and more) without losing the feeling of the data being close. @rob_farley
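    As a concrete illustration of the hybrid features described above, this is roughly what an encrypted backup to cloud storage looks like in T-SQL (the syntax that later shipped in SQL Server 2014; the storage account, credential and certificate names are hypothetical):

        # Assumes a storage credential 'mycred' and a server certificate BackupCert already exist:
        sqlcmd -Q "BACKUP DATABASE Sales
            TO URL = 'https://myaccount.blob.core.windows.net/backups/Sales.bak'
            WITH CREDENTIAL = 'mycred',
                 ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert);"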

    Read the article

  • MySQL Server 5.6 defaults changes

    - by user12626240
    We're improving the MySQL Server defaults, as announced by Tomas Ulin at MySQL Connect. Here's what we're changing:

        back_log: 50 -> 50 + (max_connections / 5), capped at 900
        binlog_checksum: off -> CRC32 (new variable in 5.6)
        binlog_row_event_max_size: 1k -> 8k
        flush_time: on Windows, 1800 -> 0 (was already 0 on other platforms)
        host_cache_size: 128 -> 128 + 1 for each of the first 500 max_connections + 1 for every 20 max_connections over 500, capped at 2000 (new variable in 5.6)
        innodb_autoextend_increment: 8 -> 64 (now affects *.ibd files; 64 is 64 megabytes)
        innodb_buffer_pool_instances: 0 -> 8 (on 32-bit Windows only, if innodb_buffer_pool_size is greater than 1300M, the default is innodb_buffer_pool_size / 128M)
        innodb_concurrency_tickets: 500 -> 5000
        innodb_file_per_table: off -> on
        innodb_log_file_size: 5M -> 48M (InnoDB will always change the size to match the my.cnf value; also see innodb_log_compressed_pages and binlog_row_image)
        innodb_old_blocks_time: 0 -> 1000 (1 second)
        innodb_open_files: 300 -> 300, or if innodb_file_per_table is ON, the higher of table_open_cache and 300
        innodb_purge_batch_size: 20 -> 300
        innodb_purge_threads: 0 -> 1
        innodb_stats_on_metadata: on -> off
        join_buffer_size: 128k -> 256k
        max_allowed_packet: 1M -> 4M
        max_connect_errors: 10 -> 100
        open_files_limit: 0 -> 5000 (see note 1)
        query_cache_size: 0 -> 1M
        query_cache_type: on/1 -> off/0
        sort_buffer_size: 2M -> 256k
        sql_mode: none -> NO_ENGINE_SUBSTITUTION (see a later post about the default my.cnf for STRICT_TRANS_TABLES)
        sync_master_info: 0 -> 10000 (recommend: master_info_repository=table)
        sync_relay_log: 0 -> 10000
        sync_relay_log_info: 0 -> 10000 (recommend: relay_log_info_repository=table; also see Replication Relay and Status Logs)
        table_definition_cache: 400 -> 400 + table_open_cache / 2, capped at 2000
        table_open_cache: 400 -> 2000 (also see table_open_cache_instances)
        thread_cache_size: 0 -> 8 + max_connections/100, capped at 100

    Note 1: in 5.5 there was already a rule to make open_files_limit 10 + max_connections + table_cache_size * 2 if that was higher than the user-specified value. It now uses the higher of that and (5000 or what you specify).

    We are also adding a new default my.cnf file and guided instructions on the key settings to adjust. More on this in a later post. We're also providing a page with suggestions for settings to improve backwards compatibility. The old example files like my-huge.cnf are obsolete. Some of the improvements are present from 5.6.6 and the rest are coming. These are ideas, and until they are in an official GA release, they are subject to change.

    As part of this work I reviewed every old server setting, plus many hundreds of emails of feedback and testing results from inside and outside Oracle's MySQL Support team, and the many excellent blog entries and comments from others over the years, including from many MySQL gurus out there, like Baron, Sheeri, Ronald, Schlomi, Giuseppe and Mark Callaghan.

    With these changes we're trying to make it easier to set up the server by adjusting only a few settings; those few will cause others to be set. This happens only at server startup and only applies to variables where you haven't set a value. You'll see a similar approach used for the Performance Schema. The gurus don't need this, but for many newcomers the defaults will be very useful.

    Possibly the most unusual change is the way we vary the setting for innodb_buffer_pool_instances for 32-bit Windows.
    This is because we've found that DLLs with specified load addresses often fragment the limited four-gigabyte 32-bit address space and make it impossible to allocate more than about 1300 megabytes of contiguous address space for the InnoDB buffer pool; the smaller requests for many pools are more likely to succeed. If you change the value of innodb_log_file_size in my.cnf, you will see a message like this in the error log at the next restart, instead of the old error message:

        [Warning] InnoDB: Resizing redo log from 2*64 to 5*128 pages, LSN=5735153

    One of the biggest challenges for the defaults is the millions of installations on a huge range of systems, from point-of-sale terminals and routers, through shared hosting or end-user systems, on to major servers with lots of CPU cores, hundreds of gigabytes of RAM and terabytes of fast disk space. Our past defaults were for the smaller systems; these new ones shift that towards larger shared hosting or shared end-user systems, still with a bias towards the smaller end. There is a bias in favour of OLTP workloads, so reporting systems may need more changes. Where there is a conflict between the best settings for benchmarks and normal use, we've favoured production, not benchmarks. We're very interested in your feedback, comments and suggestions.
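    If you want to see which of these defaults your own build is running with, you can query the variables directly. A minimal check, assuming a local server and root credentials:

        mysql -u root -p -e "SHOW GLOBAL VARIABLES WHERE Variable_name IN
          ('back_log','innodb_file_per_table','innodb_log_file_size','query_cache_type','sql_mode');"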

    Read the article

  • Oracle Excellence Award

    - by Hartmut Wiese
    CALL FOR NOMINATIONS
    2014 Oracle Excellence Award: Sustainability Innovation

    Is your organization using an Oracle product to help with a sustainability initiative while reducing costs? Saving energy? Saving gas? Saving paper? For example, you may use Oracle's Agile Product Lifecycle Management to design more eco-friendly products, Oracle Transportation Management to reduce fleet emissions, Oracle Exadata Database Machine to decrease power and cooling needs while increasing database performance, Oracle Business Intelligence to measure environmental impacts, or one of many other Oracle products. Your organization may be eligible for the 2014 Oracle Excellence Award: Sustainability Innovation. Submit a nomination form (located here) by Friday, June 20 if your company is using any Oracle product to take an environmental lead as well as to reduce costs and improve business efficiencies by using green business practices. These awards will be presented during Oracle OpenWorld 2014 (September 28 - October 2) in San Francisco.

    About the Award
    • Winners will be selected from the customer and/or partner nominations. Either a customer, their partner, or an Oracle representative can submit the nomination form on behalf of the customer.
    • There is a nomination form here to discuss your use of Oracle products and how they have helped your sustainability efforts and reduced costs.
    • Winners will be selected based on the extent of the environmental impact they have had, as well as the business efficiencies they have achieved through their combined use of Oracle products.

    Nomination Eligibility
    • Your company uses at least one component of Oracle products, whether it's the Oracle database, business applications, Fusion Middleware, or Sun servers/storage.
    • This solution should be in production or in active development.
    • Nomination deadline: Friday, June 20, 2014.

    Benefits to Award Winners
    • Award presented to winners during Oracle OpenWorld by Jeff Henley, Oracle Chairman of the Board
    • Free Oracle OpenWorld registration pass for each winning customer
    • 2014 Oracle Excellence Award: Sustainability Innovation award logo for inclusion on your own website and/or press release
    • Possible placement in Oracle Profit Magazine and/or Oracle Magazine
    • 'Enable the Eco-Enterprise' podcast opportunity

    See last year's winners here.

    Questions? Send an email to: [email protected]
    Follow Oracle's Sustainability Solutions on Twitter, LinkedIn, YouTube, and the Sustainability Matters blog.
    Web page with award details: http://www.oracle.com/us/products/applications/green/call-for-nominations-185050.html

    Read the article

  • I.T. Chargeback : Core to Cloud Computing

    - by Anand Akela
    Contributed by Mark McGill

    Consolidation and virtualization have been widely adopted over the years to help deliver benefits such as increased server utilization, greater agility and lower cost to the I.T. organization. These are key enablers of cloud, but in themselves they do not provide a complete cloud solution. Building a true enterprise private cloud involves moving from an admin-driven world, where the I.T. department is ultimately responsible for the provisioning of servers, databases, middleware and applications, to a world where the consumers of I.T. resources can provision their infrastructure, platforms and even complete application stacks on demand.

    Switching from an admin-driven provisioning model to a user-driven model creates some challenges. How do you ensure that users provisioning resources will not provision more than they need? How do you encourage users to return resources when they have finished with them so that others can use them? While chargeback has existed as a concept for many years (especially in mainframe environments), it is the move to this self-service model that has created a need for a new breed of chargeback applications for cloud. Enabling self-service without some form of chargeback is like opening a shop where all of the goods are free.

    A successful chargeback solution will be able to allocate the costs of shared I.T. infrastructure based on the relative consumption by the users. Doing this creates transparency between the I.T. department and the consumers of I.T. When users are able to understand how their consumption translates to cost, they are much more likely to be prudent when it comes to their use of I.T. resources. This also gives them control of their I.T. costs, as moderate usage will translate to a lower charge at the end of the month. Implementing chargeback successfully creates a win-win situation for I.T. and the consumers. Chargeback can help to ensure that I.T. resources are used for activities that deliver business value. It also improves the overall utilization of I.T. infrastructure, as I.T. resources that are not needed are not left running idle.

    Enterprise Manager 12c provides an integrated metering and chargeback solution for Enterprise Manager targets. This solution is built on top of the rich configuration and utilization information already available in Enterprise Manager. It provides metering not just for virtual machines, but also for physical hosts, databases and middleware. Enterprise Manager 12c provides metering based on the utilization and configuration of the following types of Enterprise Manager target:

    • Oracle VM Host
    • Oracle Database
    • Oracle WebLogic Server

    Using Enterprise Manager Chargeback, administrators are able to create a set of Charge Plans that are used to attach prices to the various metered resources. These plans can contain fixed costs (e.g. $10/month/database), configuration-based costs (e.g. $10/month if the OS is Windows) and utilization-based costs (e.g. $0.05/GB of memory/hour). The self-service user provisioning these resources is then able to view a report that details their usage and helps them understand how this usage translates into cost. Armed with this information, the user is able to determine if the resources are delivering adequate business value based on what is being charged.

    [Figure 1: Chargeback in Self-Service Portal]

    Enterprise Manager 12c provides a variety of additional interfaces into this data. The administrator can access summary and trending reports. Summary reports allow the administrator to drill down through the cost center hierarchy to identify, for example, the top resource consumers across the organization.

    [Figure 2: Charge Summary Report]

    Trending reports can be used for I.T. planning and budgeting, as they show utilization and charge trends over a period of time.

    [Figure 3: CPU Trend Report]

    We also provide chargeback reports through BI Publisher. This provides a way for users who do not have an Enterprise Manager login (such as Line of Business managers) to view charge and usage information. For situations where a bill needs to be produced, chargeback can be integrated with billing applications such as Oracle Billing and Revenue Management (BRM).

    Further information on Enterprise Manager 12c's integrated metering and chargeback: White Paper | Screenwatch | Cloud Management on OTN
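    To make the charge plan arithmetic concrete, here is a minimal sketch of how the three kinds of rates combine into one monthly figure; the names and prices are made up for illustration and are not EM12c's actual plan format:

        # Hypothetical charge: fixed + configuration-based + utilization-based components
        awk 'BEGIN {
            fixed        = 10.00;  # $/month per database
            os_surcharge = 10.00;  # $/month because the OS is Windows
            mem_rate     =  0.05;  # $/GB of memory per hour
            mem_gb = 8; hours = 720;
            printf "monthly charge: $%.2f\n", fixed + os_surcharge + mem_rate * mem_gb * hours
        }'
        # prints: monthly charge: $308.00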

    Read the article

  • EPPM Is a Must-Have Capability as Global Energy and Power Industries Eye US$38 Trillion in New Investments

    - by Melissa Centurio Lopes
    "The process manufacturing industry is facing an unprecedented challenge: from now until 2035, cumulative worldwide investments of US$38 trillion will be required for drilling, power generation, and other energy projects," Iain Graham, director of energy and process manufacturing for Oracle's Primavera, said in a recent webcast. He adds that process manufacturing organizations such as oil and gas, utilities, and chemicals must manage this level of investment in an environment of constrained capital markets, erratic supply and demand, aging infrastructure, heightened regulations, and declining global skills. In the following interview, Graham explains how the right enterprise project portfolio management (EPPM) technology can help the industry meet these imperatives.

    Q: Why is EPPM so important for today's process manufacturers?
    A: If the industry invests US$38 trillion without proper cost controls in place, a huge amount of resources will be put at risk, especially when it comes to cost overruns that may occur in large capital projects. Process manufacturing companies must not only control costs, but also monitor all the various contractors that will be involved in each project. If you're not managing your own workers and all the interdependencies among the different contractors, then you've got problems.

    Q: What else should process manufacturers look for?
    A: It's also important that an EPPM solution has the ability to manage more than just capital projects. For example, it's best to manage maintenance and capital projects in the same system. Say you're due to install a new transformer in a power station as part of a capital project, but routine maintenance in that area of the facility is scheduled for that morning. The lack of coordination could lead to unforeseen delays. There are also IT considerations that impact capital projects, such as adding servers and network cable for a control system in a power station. What organizations need is a true EPPM system that's not just for capital projects, maintenance, or IT activities, but instead an enterprisewide solution that provides visibility into all types of projects.

    Read the complete Q&A here and discover the practical framework for successfully managing this massive capital spending.

    Read the article

  • Fusion Middleware information from OTN Japan, part 3 [original Japanese title garbled in extraction]

    - by rika.tokumichi
    [The body of this Japanese-language post was lost to character-encoding damage. The fragments that survive mention: the OTN Japan Fusion Middleware pages and a February 2010 article ranking; WebLogic Server, JRockit and Java articles; Oracle University certifications (Oracle WebLogic Server 10g System Administrator Certified Expert, Oracle WebLogic Server 10g Developer Certified Expert, ORACLE MASTER Silver/Gold Oracle Application Server 10g, and ORACLE MASTER Oracle Middleware); the oraclemiddle_jp and OTN Japan Twitter accounts; Oracle SOA Suite 11g, Oracle Coherence and Oracle Tuxedo resources; and the INFORMATION INDEPTH NEWSLETTERS Fusion Middleware Edition and Oracle's Dev2DBA Newsletter.]

    Read the article

  • tightvnc authentication failure

    - by broiyan
    When I run a TightVNC client to establish a VNC session, I sometimes receive an error message that suggests there are repeated failed VNC login attempts or a brute-force attack. The message dialog title is "unsupported security type" and the text content is "too many authentication failures, try another connection? yes/no". This problem goes away if I reboot the Ubuntu server, reload the VNC server program and try again. From that point, it will work for multiple VNC sessions. My VNC sessions are typically about 20 minutes. At some time in the future the problem may recur, so it seems correlated with how long the server has been up or how long tightvnc has been loaded. Typically it takes only a day or so before the problem comes back. I am using tightvnc 1.3 on a server running Ubuntu 12.04. The version of vncserver is rather dated because that seems to be all that is available from tightvnc for Linux servers. On the client side I use the newest Java-based VNC client (version 2.5) for both Windows access and Ubuntu access. All my VNC sessions are via SSH. I am the only user and I will typically use only the same client computer. How can I stop this problem from recurring?

    Edit: I found the log file. This is a small excerpt of what I am seeing. Essentially, various IPs, not my own, are attempting to connect. What is the practical solution for this?

        05/06/12 20:07:32 Got connection from client 69.194.204.90
        05/06/12 20:07:32 Non-standard protocol version 3.4, using 3.3 instead
        05/06/12 20:07:32 Too many authentication failures - client rejected
        05/06/12 20:07:32 Client 69.194.204.90 gone
        05/06/12 20:07:32 Statistics:
        05/06/12 20:07:32   framebuffer updates 0, rectangles 0, bytes 0
        05/06/12 20:24:56 Got connection from client 79.161.16.40
        05/06/12 20:24:56 Non-standard protocol version 3.4, using 3.3 instead
        05/06/12 20:24:56 Too many authentication failures - client rejected
        05/06/12 20:24:56 Client 79.161.16.40 gone
        05/06/12 20:24:56 Statistics:
        05/06/12 20:24:56   framebuffer updates 0, rectangles 0, bytes 0
        05/06/12 20:29:27 Got connection from client 109.230.246.54
        05/06/12 20:29:27 Non-standard protocol version 3.4, using 3.3 instead
        05/06/12 20:29:28 rfbVncAuthProcessResponse: authentication failed from 109.230.246.54
        05/06/12 20:29:28 Client 109.230.246.54 gone
        05/06/12 20:29:28 Statistics:
        05/06/12 20:29:28   framebuffer updates 0, rectangles 0, bytes 0
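    Since the sessions already run over SSH, one option (an assumption about the setup, not part of the original question) is to stop exposing the RFB port to the WAN entirely and tunnel instead:

        # On the server: accept VNC connections from localhost only (display :1 listens on port 5901)
        vncserver -localhost :1
        # On the client: forward a local port over SSH, then point the viewer at display :1 on localhost
        ssh -L 5901:localhost:5901 user@server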

    Read the article

  • Group Policy error 1006 with and error code 52

    - by Bernesto
    I have a Hyper-V cluster operating on Win2k8 R2 in a 2003 forest. These servers are at our NOC, with a DC that connects to our PDC at HQ via a persistent VPN. The cluster boxes are reporting error event ID 1006, shown below. The DC is also reporting an error 5805, also shown below. I have found numerous posts regarding 1006 errors, but none for error code 52. It's weird: I can ping, and I can browse network shares on the DC from each. I thought maybe a DNS or network issue, but nslookup works too.

    Event 1006:

        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="Microsoft-Windows-GroupPolicy" Guid="{AEA1B4FA-97D1-45F2-A64C-4D69FFFD92C9}" />
            <EventID>1006</EventID>
            <Version>0</Version>
            <Level>2</Level>
            <Task>0</Task>
            <Opcode>1</Opcode>
            <Keywords>0x8000000000000000</Keywords>
            <TimeCreated SystemTime="2013-12-17T00:08:19.582292600Z" />
            <EventRecordID>41808</EventRecordID>
            <Correlation ActivityID="{26B10592-6228-4A3E-845B-E04B49702A54}" />
            <Execution ProcessID="964" ThreadID="1384" />
            <Channel>System</Channel>
            <Computer>NEOREEFVH1.neoreef.com</Computer>
            <Security UserID="S-1-5-18" />
          </System>
          <EventData>
            <Data Name="SupportInfo1">1</Data>
            <Data Name="SupportInfo2">5012</Data>
            <Data Name="ProcessingMode">0</Data>
            <Data Name="ProcessingTimeInMilliseconds">1138</Data>
            <Data Name="ErrorCode">52</Data>
            <Data Name="ErrorDescription">Unavailable</Data>
            <Data Name="DCName" />
          </EventData>
        </Event>

    Event 5805:

        Event Type: Error
        Event Source: NETLOGON
        Event Category: None
        Event ID: 5805
        Date: 12/16/2013
        Time: 2:32:01 PM
        User: N/A
        Computer: NEOREEFSRV15
        Description:
        The session setup from the computer NEOREEFVH3 failed to authenticate. The following error occurred: Access is denied.
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
        Data: 0000: 22 00 00 c0 "..À

    Here are the networks on the hosts; any with "Enabled" are virtual switches. [network list lost in extraction]
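    For what it's worth, two checks that are commonly suggested for 1006/error-52 symptoms (assumptions on my part, not from the original post) are to verify which DC the node locates and whether its clock agrees with that DC, since Kerberos rejects clients with too much skew:

        REM From an affected cluster node (the DC name is a placeholder):
        nltest /dsgetdc:neoreef.com
        w32tm /stripchart /computer:YOURDCNAME /samples:3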

    Read the article

  • Time Service will not start on Windows Server - System error 1290

    - by paradroid
    I have been trying to sort out some time sync issues involving two domain controllers and seem to have ended up with a bigger problem. It's horrible. They are both virtual machines (one being on Amazon EC2), which I think may complicate things regarding time servers. The primary DC with all the FSMO roles is on the LAN. I reset its time server configuration like this (from memory, with the typos corrected):

        net stop w32time
        w32tm /unregister
        shutdown /r /t 0
        w32tm /register
        w32tm /config /manualpeerlist:"0.uk.pool.ntp.org,1.uk.pool.ntp.org,2.uk.pool.ntp.org,3.uk.pool.ntp.org" /syncfromflags:manual /reliable:yes /update
        w32tm /config /update
        net start w32time
        reg QUERY HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Config /v AnnounceFlags

    I checked that AnnounceFlags was set to 0x05, which it was. The output from w32tm /query /status:

        Leap Indicator: 0(no warning)
        Stratum: 1 (primary reference - syncd by radio clock)
        Precision: -6 (15.625ms per tick)
        Root Delay: 0.0000000s
        Root Dispersion: 10.0000000s
        ReferenceId: 0x4C4F434C (source name: "LOCL")
        Last Successful Sync Time: 10/04/2012 15:03:27
        Source: Local CMOS Clock
        Poll Interval: 6 (64s)

    While this was not what was intended, I thought I would sort it out after I made sure that the remote DC was syncing with it first. On the Amazon EC2 remote replica DC (Windows Server 2008 R2 Core):

        net stop w32time
        w32tm /unregister
        shutdown /r /t 0
        w32tm /register
        net start w32time

    This is where it all goes wrong:

        System error 1290 has occurred.
        The service start failed since one or more services in the same process have an incompatible service SID type setting. A service with restricted service SID type can only coexist in the same process with other services with a restricted SID type. If the service SID type for this service was just configured, the hosting process must be restarted in order to start this service.

    I cannot get the w32time service to start. I've tried resetting the time settings and tried to reverse what I have done. The Ec2Config service cannot start either, as it depends on the w32time service. All the solutions I have seen involve going into the telephony service registry settings, but as this is Server Core it does not have that role, and I cannot see the relationship between that and the time service. w32time runs in the LocalService group, and this telephony service, which does not exist on Core, runs in the NetworkService group. Could this have something to do with the process (svchost.exe) not being able to run as a domain account now that it is a domain controller, when originally it ran as a local user group, or something like that? There seem to be a lot of cases of people having this problem, but the only solution I have seen has to do with the (non-existent on Core) telephony service. Who even uses that?
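    One workaround that is often suggested for error 1290 with w32time (an assumption on my part, not something from the original question) is to move the service out of the shared svchost process so its service SID type no longer conflicts with its process mates:

        REM Run w32time in its own process, then try starting it again
        REM (the space after "type=" is required by sc):
        sc config w32time type= own
        net start w32time
        REM To revert later: sc config w32time type= share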

    Read the article

  • System Account Logon Failures every 30 seconds

    - by floyd
    We have two Windows 2008 R2 SP1 servers running in a SQL failover cluster. On one of them we are getting the following events in the security log every 30 seconds. The parts that are blank are actually blank. Has anyone seen similar issues, or can anyone assist in tracking down the cause of these events? No other event logs show anything relevant that I can tell.

        Log Name: Security
        Source: Microsoft-Windows-Security-Auditing
        Date: 10/17/2012 10:02:04 PM
        Event ID: 4625
        Task Category: Logon
        Level: Information
        Keywords: Audit Failure
        User: N/A
        Computer: SERVERNAME.domainname.local
        Description: An account failed to log on.
        Subject:
          Security ID: SYSTEM
          Account Name: SERVERNAME$
          Account Domain: DOMAINNAME
          Logon ID: 0x3e7
        Logon Type: 3
        Account For Which Logon Failed:
          Security ID: NULL SID
          Account Name:
          Account Domain:
        Failure Information:
          Failure Reason: Unknown user name or bad password.
          Status: 0xc000006d
          Sub Status: 0xc0000064
        Process Information:
          Caller Process ID: 0x238
          Caller Process Name: C:\Windows\System32\lsass.exe
        Network Information:
          Workstation Name: SERVERNAME
          Source Network Address: -
          Source Port: -
        Detailed Authentication Information:
          Logon Process: Schannel
          Authentication Package: Kerberos
          Transited Services: -
          Package Name (NTLM only): -
          Key Length: 0

    A second event follows every one of the above events:

        Log Name: Security
        Source: Microsoft-Windows-Security-Auditing
        Date: 10/17/2012 10:02:04 PM
        Event ID: 4625
        Task Category: Logon
        Level: Information
        Keywords: Audit Failure
        User: N/A
        Computer: SERVERNAME.domainname.local
        Description: An account failed to log on.
        Subject:
          Security ID: NULL SID
          Account Name: -
          Account Domain: -
          Logon ID: 0x0
        Logon Type: 3
        Account For Which Logon Failed:
          Security ID: NULL SID
          Account Name:
          Account Domain:
        Failure Information:
          Failure Reason: An Error occured during Logon.
          Status: 0xc000006d
          Sub Status: 0x80090325
        Process Information:
          Caller Process ID: 0x0
          Caller Process Name: -
        Network Information:
          Workstation Name: -
          Source Network Address: -
          Source Port: -
        Detailed Authentication Information:
          Logon Process: Schannel
          Authentication Package: Microsoft Unified Security Protocol Provider
          Transited Services: -
          Package Name (NTLM only): -
          Key Length: 0

    EDIT/UPDATE: I have a bit more information to add. I installed Network Monitor on this machine, filtered for Kerberos traffic, and found the following, which corresponds to the timestamps in the security audit log:

        A Kerberos AS_Request
        Cname: CN=SQLInstanceName
        Realm: domain.local
        Sname: krbtgt/domain.local
        Reply from DC: KRB_ERROR: KDC_ERR_C_PRINCIPAL_UNKNOWN

    I then checked the security audit logs of the DC which responded and found the following:

        A Kerberos authentication ticket (TGT) was requested.
        Account Information:
          Account Name: X509N:<S>CN=SQLInstanceName
          Supplied Realm Name: domain.local
          User ID: NULL SID
        Service Information:
          Service Name: krbtgt/domain.local
          Service ID: NULL SID
        Network Information:
          Client Address: ::ffff:10.240.42.101
          Client Port: 58207
        Additional Information:
          Ticket Options: 0x40810010
          Result Code: 0x6
          Ticket Encryption Type: 0xffffffff
          Pre-Authentication Type: -
        Certificate Information:
          Certificate Issuer Name:
          Certificate Serial Number:
          Certificate Thumbprint:

    So it appears to be related to a certificate installed on the SQL machine; I still don't have any clue why, or what's wrong with said certificate. It's not expired, etc.
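    Two checks that might narrow this down (assumptions on my part, not from the original post): Sub Status 0x80090325 is SEC_E_UNTRUSTED_ROOT, so it may be worth confirming which certificate SQL Server/Schannel is picking up and whether its issuer chain is trusted on both cluster nodes:

        REM List the machine certificate store, then verify the chain of the suspect
        REM certificate after exporting it (the file path is hypothetical):
        certutil -store My
        certutil -verify -urlfetch C:\temp\sqlinstance.cer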

    Read the article

  • pfsense peer-to-peer OpenVPN not connecting

    - by John P
    I'm trying to set up a peer-to-peer OpenVPN between two pfSense servers running 2.0.1-RELEASE, but the client keeps getting the connection dropped, with a status of "reconnecting; ping-restart", and nothing appears to be routing between them. Both of these firewalls are also doing PPTP VPNs that are working correctly.

    FW01 ("server"):
        LAN: 10.1.1.2/24
        WAN: xx.xx.126.34/27
        Server Mode: Peer to Peer (Shared Key)
        Protocol: UDP
        Device Mode: tun
        Interface: WAN, port 1194
        Tunnel: 10.0.8.1/30
        Local Network: 10.1.1.0/24
        Remote Network: 192.168.1.0/24
        Firewall rule in OpenVPN tab: UDP * * * * * none

    FW03 (client):
        LAN: 192.168.1.2/24
        WAN: xx.xx.9.66/27
        Server Mode: Peer to Peer (Shared Key)
        Protocol: UDP
        Device Mode: tun
        Interface: WAN
        Server Host: xx.xx.126.34
        Tunnel: -- also tried 10.1.8.0/24
        Remote Network: 10.1.1.0/24

    Client system log:

        Apr 6 18:00:08 kernel: ... Restarting packages.
        Apr 6 18:00:13 check_reload_status: Starting packages
        Apr 6 18:00:19 php: : Restarting/Starting all packages.
        Apr 6 18:00:56 kernel: ovpnc1: link state changed to DOWN
        Apr 6 18:00:56 check_reload_status: Reloading filter
        Apr 6 18:00:57 check_reload_status: Reloading filter
        Apr 6 18:00:57 kernel: ovpnc1: link state changed to UP
        Apr 6 18:00:57 check_reload_status: rc.newwanip starting ovpnc1
        Apr 6 18:00:57 check_reload_status: Syncing firewall
        Apr 6 18:01:02 php: : rc.newwanip: Informational is starting ovpnc1.
        Apr 6 18:01:02 php: : rc.newwanip: on (IP address: ) (interface: ) (real interface: ovpnc1).
        Apr 6 18:01:02 php: : rc.newwanip: Failed to update IP, restarting...
        Apr 6 18:01:02 php: : send_event: sent interface reconfigure got ERROR: incomplete command. all reload reconfigure restart newip linkup sync

    Client OpenVPN log:

        Apr 6 18:39:14 openvpn[12177]: Inactivity timeout (--ping-restart), restarting
        Apr 6 18:39:14 openvpn[12177]: SIGUSR1[soft,ping-restart] received, process restarting
        Apr 6 18:39:16 openvpn[12177]: NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
        Apr 6 18:39:16 openvpn[12177]: Re-using pre-shared static key
        Apr 6 18:39:16 openvpn[12177]: Preserving previous TUN/TAP instance: ovpnc1
        Apr 6 18:39:16 openvpn[12177]: UDPv4 link local (bound): [AF_INET]64.94.9.66
        Apr 6 18:39:16 openvpn[12177]: UDPv4 link remote: [AF_INET]64.74.126.34:1194

    Server OpenVPN log (most recent entries first):

        Apr 6 14:40:36 openvpn[22117]: UDPv4 link remote: [undef]
        Apr 6 14:40:36 openvpn[22117]: UDPv4 link local (bound): [AF_INET]xx.xx.126.34:1194
        Apr 6 14:40:36 openvpn[21006]: /usr/local/sbin/ovpn-linkup ovpns1 1500 1557 10.1.8.1 10.1.8.2 init
        Apr 6 14:40:36 openvpn[21006]: /sbin/ifconfig ovpns1 10.1.8.1 10.1.8.2 mtu 1500 netmask 255.255.255.255 up
        Apr 6 14:40:36 openvpn[21006]: do_ifconfig, tt->ipv6=0, tt->did_ifconfig_ipv6_setup=0
        Apr 6 14:40:36 openvpn[21006]: TUN/TAP device /dev/tun1 opened
        Apr 6 14:40:36 openvpn[21006]: Control Channel Authentication: using '/var/etc/openvpn/server1.tls-auth' as a OpenVPN static key file
        Apr 6 14:40:36 openvpn[21006]: NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
        Apr 6 14:40:36 openvpn[21006]: OpenVPN 2.2.0 amd64-portbld-freebsd8.1 [SSL] [LZO2] [eurephia] [MH] [PF_INET6] [IPv6 payload 20110424-2 (2.2RC2)] built on Aug 11 2011
        Apr 6 14:40:36 openvpn[17171]: SIGTERM[hard,] received, process exiting
        Apr 6 14:40:36 openvpn[17171]: /usr/local/sbin/ovpn-linkdown ovpns1 1500 1557 10.1.8.1 10.1.8.2 init
        Apr 6 14:40:36 openvpn[17171]: ERROR: FreeBSD route delete command failed: external program exited with error status: 1
        Apr 6 14:40:36 openvpn[17171]: event_wait : Interrupted system call (code=4)
        Apr 6 14:06:32 openvpn[17171]: Initialization Sequence Completed
        Apr 6 14:06:32 openvpn[17171]: UDPv4 link remote: [undef]
        Apr 6 14:06:32 openvpn[17171]: UDPv4 link local (bound): [AF_INET]xx.xx.126.34:1194
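    For a shared-key peer-to-peer link, both ends have to agree on a single tunnel network, and the server log above shows it actually using 10.1.8.1/10.1.8.2. As a sketch, the configs pfSense generates on the two boxes should reduce to a matching pair like this (the file and key paths are assumptions):

        # FW01 (server) side:
        dev tun
        proto udp
        port 1194
        ifconfig 10.1.8.1 10.1.8.2
        route 192.168.1.0 255.255.255.0
        secret /var/etc/openvpn/shared.key
        keepalive 10 60

        # FW03 (client) side, the mirror image:
        dev tun
        proto udp
        remote xx.xx.126.34 1194
        ifconfig 10.1.8.2 10.1.8.1
        route 10.1.1.0 255.255.255.0
        secret /var/etc/openvpn/shared.key
        keepalive 10 60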

    Read the article

  • Windows Server 2003 Terminal Server does not give out all available licenses

    - by Erwin Blonk
    I installed the Terminal Server role on Windows Server 2003 Standard 64-bit. Still, only 2 connections are allowed. The License Manager says that there are 10 Device CALs available, which is correct, and that none are given out. For good measure I let the server reboot, to no effect. Before this, there was another server (same Windows, except that it is 32-bit) active as a licensing server. I removed the role from it first and then added it to the new server. I then removed the Terminal Server Licensing Server component from the old one and added it to the new one. After that, I added the licenses. When that didn't give the required result, I rebooted the new server. Still, the new server, licenses and all, acts as if it has only the 2-license RDP limit. The servers are all stand-alone; there is no Active Directory set up. Both servers are in different workgroups.

    Update (4/12/10): The server has changed the entries in Terminal Server Licensing a few times. After installing the licenses it added an entry whose exact phrasing I forget, but it was about temporary Windows 2003 device licenses. Later it added "Windows Server 2003 - TS Per Device CAL". The temporary pool held 2 licenses (standard RDP licenses, I think) and the other held 10. At some point, seemingly unrelated to the testing we did, it used a license from the new pool. This morning, 2 licenses were used from the pool of 10 and only 1 from the temporary/RDP pool (I wish I had screenshots to show; it changed every few hours, or so it seems). Although I had already activated the server over the internet, and re-activated it, I decided to go through the whole procedure by phone. Long story short, here is what it says now:

        Existing Windows 2000 Server, type: built-in [no licenses used; I add this for the sake of completeness]
        Windows Server 2003 - Terminal Server Per Device CAL Token, type: open [none of 10 used]
        Windows Server 2003 - TS Per Device CAL, type: open [3 of 10 used]

    As I tried to explain, this is the end result after a few changes, most of which I can't directly connect to any action on my part. Only going through the activation procedure by phone seemed to directly affect the TS, resulting in the above configuration. Still, it is impossible to connect with more than 3 people, which is 1 up from the 2 that could connect yesterday. TS does say 7 licenses are available, yet it won't give them out.

    Read the article

  • L2TP iptables port forward

    - by The_cobra666
    Hi all, I'm setting up port forwarding for an L2TP VPN connection to the local Windows 2003 VPN server. The router is a simple Debian machine with iptables. The VPN server works perfectly, but I cannot log in from the WAN; I'm missing something. The VPN server uses a pre-shared key (L2TP) and gives out IPs in the 192.168.3.0 range. The local network range is 192.168.2.0/24.

    I added the route with:

        route add -net 192.168.3.0 netmask 255.255.255.240 gw 192.168.2.13    # (the VPN server)

    Then the forwards:

        iptables -t nat -A PREROUTING -p udp --dport 1701 -i eth0 -j DNAT --to 192.168.2.13
        iptables -A FORWARD -p udp --dport 1701 -j ACCEPT
        iptables -t nat -A PREROUTING -p udp --dport 500 -i eth0 -j DNAT --to 192.168.2.13
        iptables -A FORWARD -p udp --dport 500 -j ACCEPT
        iptables -t nat -A PREROUTING -p udp --dport 4500 -i eth0 -j DNAT --to 192.168.2.13
        iptables -A FORWARD -p udp --dport 4500 -j ACCEPT
        iptables -t nat -A PREROUTING -p 50 -j DNAT --to 192.168.2.13
        iptables -A FORWARD -p 50 -j ACCEPT
        iptables -t nat -A PREROUTING -p 51 -j DNAT --to 192.168.2.13
        iptables -A FORWARD -p 51 -j ACCEPT

    The whole iptables script (without the lines from above; comments translated from Dutch):

        echo 1 > /proc/sys/net/ipv4/ip_forward
        echo 1 > /proc/sys/net/ipv4/tcp_syncookies

        # flush the tables
        iptables -F INPUT
        iptables -F OUTPUT
        iptables -F FORWARD
        iptables -t nat -F

        # default policies: drop traffic
        iptables -P INPUT DROP
        iptables -P FORWARD DROP
        iptables -P OUTPUT ACCEPT

        # allow outbound traffic and enable NAT
        iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

        # RDP forwards for the Windows servers
        iptables -t nat -A PREROUTING -p tcp --dport 3389 -i eth0 -j DNAT --to 192.168.2.10:3389
        iptables -A FORWARD -p tcp --dport 3389 -j ACCEPT
        iptables -t nat -A PREROUTING -p tcp --dport 3340 -i eth0 -j DNAT --to 192.168.2.12:3340
        iptables -A FORWARD -p tcp --dport 3340 -j ACCEPT

        # allow SSH traffic
        iptables -t nat -A PREROUTING -p tcp --dport 22 -i eth0 -j DNAT --to-destination 192.168.2.1
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT

        # allow loopback traffic
        iptables -A INPUT -i lo -j ACCEPT

        # allow the local network
        iptables -A INPUT -i eth1 -j ACCEPT

        # accept established traffic
        iptables -A INPUT -i eth0 --match state --state RELATED,ESTABLISHED -j ACCEPT

        # rate-limit, then reject ICMP messages
        iptables -A INPUT -p icmp -i eth0 -m limit --limit 10/minute -j ACCEPT
        iptables -A INPUT -p icmp -i eth0 -j REJECT

        ifconfig eth1 192.168.2.1/24
        ifconfig eth0 XXXXXXXXXXXXX/30
        ifconfig eth0 up
        ifconfig eth1 up
        route add default gw XXXXXXXXXXXXXXXXXXX
        route add -net 192.168.3.0 netmask 255.255.255.240 gw 192.168.2.13

    Read the article

  • nginx proxying websockets, must be missing something

    - by CodeMonkey
    I have a basic chat app written in node.js using express and socket.io. It works fine when connecting directly to node on port 3000, but doesn't work when I try to use nginx v1.4.2 as a proxy. I start off with the connection upgrade map:

        map $http_upgrade $connection_upgrade {
            default upgrade;
            ''      close;
        }

    Then add the locations:

        location /socket.io/ {
            proxy_pass http://node;
            proxy_redirect off;
            proxy_http_version 1.1;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header X-Request-Id $txid;
            proxy_set_header X-Session-Id $uid_set+$uid_got;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_buffering off;
            proxy_read_timeout 86400;
            keepalive_timeout 90;
            proxy_cache off;
            access_log /var/log/nginx/webservice.access.log;
            error_log /var/log/nginx/webservice.error.log;
        }

        location /web-service/ {
            proxy_pass http://node;
            proxy_redirect off;
            proxy_http_version 1.1;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header X-Request-Id $txid;
            proxy_set_header X-Session-Id $uid_set+$uid_got;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_buffering off;
            proxy_read_timeout 86400;
            keepalive_timeout 90;
            access_log /var/log/nginx/webservice.access.log;
            error_log /var/log/nginx/webservice.error.log;
            rewrite /web-service/(.*) /$1 break;
            proxy_cache off;
        }

    These are built up from all of the tips to get it working that I could find. The error log does not show any errors (except when I stop node, to test that error logging is working). When going through nginx I do see a websocket connection in the dev tools, with a status of 101, but the Frames tab under the request is empty. The only difference I can see in the response headers is a case difference, "upgrade" vs "Upgrade".

    Through nginx:

        Connection:upgrade
        Date:Fri, 08 Nov 2013 11:49:25 GMT
        Sec-WebSocket-Accept:LGB+iEBb8Ql9zYfqNfuuXzdzjgg=
        Server:nginx/1.4.2
        Upgrade:websocket

    Direct from node:

        Connection:Upgrade
        Sec-WebSocket-Accept:8nwPpvg+4wKMOyQBEvxWXutd8YY=
        Upgrade:websocket

    Output from node (when used through nginx):

        debug - served static content /socket.io.js
        debug - client authorized
        info - handshake authorized iaej2VQlsbLFIhachyb1
        debug - setting request GET /socket.io/1/websocket/iaej2VQlsbLFIhachyb1
        debug - set heartbeat interval for client iaej2VQlsbLFIhachyb1
        debug - client authorized for
        debug - websocket writing 1::
        debug - websocket writing 5:::{"name":"message","args":[{"message":"welcome to the chat"}]}
        debug - clearing poll timeout
        debug - jsonppolling writing io.j[0]("8::");
        debug - set close timeout for client 7My3F4CuvZC0I4Olhybz
        debug - jsonppolling closed due to exceeded duration
        debug - clearing poll timeout
        debug - jsonppolling writing io.j[0]("8::");
        debug - set close timeout for client AkCYl0nWNZAHeyUihyb0
        debug - jsonppolling closed due to exceeded duration
        debug - setting request GET /socket.io/1/xhr-polling/iaej2VQlsbLFIhachyb1?t=1383911206158
        debug - setting poll timeout
        debug - discarding transport
        debug - cleared heartbeat interval for client iaej2VQlsbLFIhachyb1
        debug - setting request GET /socket.io/1/jsonp-polling/iaej2VQlsbLFIhachyb1?t=1383911216160&i=0
        debug - setting poll timeout
        debug - discarding transport
        debug - clearing poll timeout
        debug - clearing poll timeout
        debug - jsonppolling writing io.j[0]("8::");
        debug - set close timeout for client iaej2VQlsbLFIhachyb1
        debug - jsonppolling closed due to exceeded duration
        debug - setting request GET /socket.io/1/jsonp-polling/iaej2VQlsbLFIhachyb1?t=1383911236429&i=0
        debug - setting poll timeout
        debug - discarding transport
        debug - cleared close timeout for client iaej2VQlsbLFIhachyb1

    When connecting directly to node, the client does not start polling. The normal HTTP output from node works fine with nginx. Clearly there is something I am not seeing, but I am stuck. Thanks :)
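    One piece the snippets above don't show (an assumption on my part, since it isn't in the post) is the upstream block that proxy_pass http://node; refers to; a matching definition would look something like:

        # Assumed upstream pointing at the node app from the post (it listens on port 3000):
        upstream node {
            server 127.0.0.1:3000;
        }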

    Read the article

  • SQL Server Agent was not running on Server Dynamics CRM 2013

    - by No1_Melman
    I'm trying to install Dynamics CRM 2013 on a server. This server is a VM; there are several other VMs: an ADDS & DNS server, an MSSQL server and a web server. Each is Windows Server 2012 R2, and the SQL Server is 2012 Enterprise. Each VM is part of the main domain, served by the ADDS & DNS machine, and nslookup confirms I can see the computer at the right IP address. Each VM has its own static IP, with its DNS set to the ADDS & DNS server. I use the domain administrator to log into all the servers, and make that domain administrator a local administrator. I've set up all the domain users for CRM and given them appropriate permissions, and I have also added the accounts in the appropriate places, such that the CRM deployment user is in the SQL security logins. The SQL Server Agent is running. SQL Server Configuration Manager has TCP/IP enabled under SQL Server Network Configuration to allow remote connections. The SQL server has the domain user as an administrator, which is the same user being used to install CRM. To make this easier I called the server MSSQL and left the instance name at the default. I even installed the MSSQL instance as the domain administrator. CRM can find the ReportServer URL. In the CRM setup I point to [Servername]\[Instance], and I have also tried just [Servername]. I have enabled all the required ports: 135, 1433, 1434, 2382, 2383 and 4022 (TCP), plus 1434 (UDP). I feel like I have done absolutely everything. I have googled many times and tried all the different methods, and for the life of me I can't seem to get the CRM setup to find the SQL Server Agent. It passes everything else perfectly fine, and I can even ping the MSSQL server. What is the problem? Why does the CRM setup keep giving this error?

        SQLSERVERAGENT (SQLSERVERAGENT) service is not running on the server MSSQL

    On the MSSQL server, the name of the SQL Server Agent service is: SQL Server Agent (MSSQLSERVER).
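    A quick way to see the naming mismatch the installer complains about (hypothetical commands, not from the original question) is to query the service from the CRM box; the agent for a default instance is registered as SQLSERVERAGENT, while a named instance uses SQLAgent$InstanceName:

        REM Query the agent service on the remote SQL host:
        sc \\MSSQL query SQLSERVERAGENT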

    Read the article

  • Linux Mint 10 LXDE computer to act as LTSP Server without luck

    - by Rautamiekka
    So I've tried to make our HP laptop (its screen is broken) also serve as an LTSP server, in addition to various other tasks, without luck, which may be because I'm running LM10 LXDE while the instructions are for Ubuntu. Excuse my ignorance. The entire terminal output after installing the LTSP packages, along with a server kernel and a load of other packages:

        administrator@rauta-mint-turion ~ $ sudo lt
        ltrace               ltsp-build-client    ltsp-chroot          ltspfs               ltspfsmounter
        ltsp-info            ltsp-localapps       ltsp-update-image    ltsp-update-kernels  ltsp-update-sshkeys
        administrator@rauta-mint-turion ~ $ sudo ltsp-update-sshkeys
        administrator@rauta-mint-turion ~ $ sudo ltsp-update-kernels
        find: `/opt/ltsp/': No such file or directory
        administrator@rauta-mint-turion ~ $ sudo ltsp-build-client
        /usr/share/ltsp/plugins/ltsp-build-client/common/010-chroot-tagging: line 3: /opt/ltsp/i386/etc/ltsp_chroot: No such file or directory
        error: LTSP client installation ended abnormally
        administrator@rauta-mint-turion ~ $ sudo ltsp-update-image
        Cannot determine assigned port. Assigning to port 2000.
        mkdir: cannot create directory `/opt/ltsp/i386/etc/ltsp': No such file or directory
        /usr/sbin/ltsp-update-image: 274: cannot create /opt/ltsp/i386/etc/ltsp/update-kernels.conf: Directory nonexistent
        /usr/sbin/ltsp-update-image: 274: cannot create /opt/ltsp/i386/etc/ltsp/update-kernels.conf: Directory nonexistent
        Regenerating kernel...
        chroot: failed to run command `/usr/share/ltsp/update-kernels': No such file or directory
        Done.
        Configuring inetd...
        Done.
        Updating pxelinux default configuration...Done.
        Skipping invalid chroot: /opt/ltsp/i386
        chroot: failed to run command `test': No such file or directory
        administrator@rauta-mint-turion ~ $ sudo ltsp-chroot
        chroot: failed to run command `/bin/bash': No such file or directory
        administrator@rauta-mint-turion ~ $ bash
        administrator@rauta-mint-turion ~ $ exit
        exit
        administrator@rauta-mint-turion ~ $ sudo ls /opt/ltsp
        i386
        administrator@rauta-mint-turion ~ $ sudo ls /opt/ltsp/i386/
        administrator@rauta-mint-turion ~ $ sudo ltsp-build-client
        NOTE: Root directory /opt/ltsp/i386 already exists, this will lead to problems, please remove it before trying again. Exiting.
        error: LTSP client installation ended abnormally
        administrator@rauta-mint-turion ~ $ sudo rm -rv /opt/ltsp/i386
        removed directory: `/opt/ltsp/i386'
        administrator@rauta-mint-turion ~ $ sudo ltsp-build-client
        /usr/share/ltsp/plugins/ltsp-build-client/common/010-chroot-tagging: line 3: /opt/ltsp/i386/etc/ltsp_chroot: No such file or directory
        error: LTSP client installation ended abnormally
        administrator@rauta-mint-turion ~ $ aptitude search ltsp
        p   fts-ltsp-ldap                - LDAP LTSP module for the TFTP/Fuse supplicant
        p   ltsp-client                  - LTSP client environment
        p   ltsp-client-core             - LTSP client environment (core)
        p   ltsp-cluster-accountmanager  - Account creation and management daemon for LTSP
        p   ltsp-cluster-control         - Web based thin-client configuration management
        p   ltsp-cluster-lbagent         - LTSP loadbalancer agent offers variables about the state of the ltsp server
        p   ltsp-cluster-lbserver        - LTSP loadbalancer server returns the optimal ltsp server to terminal
        p   ltsp-cluster-nxloadbalancer  - Minimal NX loadbalancer for ltsp-cluster
        p   ltsp-cluster-pxeconfig       - LTSP-Cluster symlink generator
        p   ltsp-controlaula             - Classroom management tool with ltsp clients
        p   ltsp-docs                    - LTSP Documentation
        p   ltsp-livecd                  - starts an LTSP live server on an Ubuntu livecd session
        p   ltsp-manager                 - Ubuntu LTSP server management GUI
        i A ltsp-server                  - Basic LTSP server environment
        i   ltsp-server-standalone       - Complete LTSP server environment
        i A ltspfs                       - Fuse based remote filesystem for LTSP thin clients
        p   ltspfsd                      - Fuse based remote filesystem hooks for LTSP thin clients
        p   ltspfsd-core                 - Fuse based remote filesystem daemon for LTSP thin clients
        p   python-ltsp                  - provides ltsp related functions
        administrator@rauta-mint-turion ~ $ sudo aptitude purge ltsp-server ltsp-server-standalone ltspfs
        The following packages will be REMOVED:
          debconf-utils{u} debootstrap{u} dhcp3-server{u} gstreamer0.10-pulseaudio{u} ldm-server{u} libpulse-browse0{u}
          ltsp-server{p} ltsp-server-standalone{p} ltspfs{p} nbd-server{u} openbsd-inetd{u} pulseaudio{u}
          pulseaudio-esound-compat{u} pulseaudio-module-x11{u} pulseaudio-utils{u} squashfs-tools{u} tftpd-hpa{u}
        0 packages upgraded, 0 newly installed, 17 to remove and 0 not upgraded.
        Need to get 0B of archives. After unpacking 6,996kB will be freed.
        Do you want to continue? [Y/n/?]
        (Reading database ... 158454 files and directories currently installed.)
        Removing ltsp-server-standalone ...
        Purging configuration files for ltsp-server-standalone ...
        Removing ltsp-server ...
        Purging configuration files for ltsp-server ...
        dpkg: warning: while removing ltsp-server, directory '/var/lib/tftpboot/ltsp' not empty so not removed.
        dpkg: warning: while removing ltsp-server, directory '/var/lib/tftpboot' not empty so not removed.
        Processing triggers for man-db ...
        (Reading database ... 158195 files and directories currently installed.)
        Removing debconf-utils ...
        Removing debootstrap ...
        Removing dhcp3-server ...
         * Stopping DHCP server dhcpd3   [ OK ]
        Removing gstreamer0.10-pulseaudio ...
        Removing ldm-server ...
        Removing pulseaudio-module-x11 ...
        Removing pulseaudio-esound-compat ...
        Removing pulseaudio ...
         * PulseAudio configured for per-user sessions
        Removing pulseaudio-utils ...
        Removing libpulse-browse0 ...
        Processing triggers for man-db ...
        Processing triggers for ureadahead ...
        ureadahead will be reprofiled on next reboot
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place
        (Reading database ... 157944 files and directories currently installed.)
        Removing ltspfs ...
        Processing triggers for man-db ...
        (Reading database ... 157932 files and directories currently installed.)
        Removing nbd-server ...
        Stopping Network Block Device server: nbd-server.
        Removing openbsd-inetd ...
         * Stopping internet superserver inetd   [ OK ]
        Removing squashfs-tools ...
        Removing tftpd-hpa ...
        tftpd-hpa stop/waiting
        Processing triggers for ureadahead ...
        Processing triggers for man-db ...
        administrator@rauta-mint-turion ~ $ sudo aptitude purge ~c
        The following packages will be REMOVED:
          dhcp3-server{p} libpulse-browse0{p} nbd-server{p} openbsd-inetd{p} pulseaudio{p} tftpd-hpa{p}
        0 packages upgraded, 0 newly installed, 6 to remove and 0 not upgraded.
        Need to get 0B of archives. After unpacking 0B will be used.
        Do you want to continue? [Y/n/?]
        (Reading database ... 157881 files and directories currently installed.)
        Removing dhcp3-server ...
        Purging configuration files for dhcp3-server ...
        Removing libpulse-browse0 ...
        Purging configuration files for libpulse-browse0 ...
        Removing nbd-server ...
        Purging configuration files for nbd-server ...
        Removing openbsd-inetd ...
        Purging configuration files for openbsd-inetd ...
        Removing pulseaudio ...
        Purging configuration files for pulseaudio ...
        Removing tftpd-hpa ...
        Purging configuration files for tftpd-hpa ...
        Processing triggers for ureadahead ...
        administrator@rauta-mint-turion ~ $ sudo aptitude install ltsp-server-standalone
        The following NEW packages will be installed:
          debconf-utils{a} debootstrap{a} ldm-server{a} ltsp-server{a} ltsp-server-standalone ltspfs{a} nbd-server{a}
          openbsd-inetd{a} squashfs-tools{a}
        The following packages are RECOMMENDED but will NOT be installed:
          dhcp3-server pulseaudio-esound-compat tftpd-hpa
        0 packages upgraded, 9 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0B/498kB of archives. After unpacking 2,437kB will be used.
        Do you want to continue? [Y/n/?]
        Preconfiguring packages ...
        Selecting previously deselected package openbsd-inetd.
        (Reading database ... 157868 files and directories currently installed.)
        Unpacking openbsd-inetd (from .../openbsd-inetd_0.20080125-4ubuntu2_i386.deb) ...
        Processing triggers for man-db ...
        Processing triggers for ureadahead ...
        Setting up openbsd-inetd (0.20080125-4ubuntu2) ...
         * Stopping internet superserver inetd   [ OK ]
         * Starting internet superserver inetd   [ OK ]
        Selecting previously deselected package ldm-server.
        (Reading database ... 157877 files and directories currently installed.)
        Unpacking ldm-server (from .../ldm-server_2%3a2.1.3-0ubuntu1_all.deb) ...
        Selecting previously deselected package debconf-utils.
        Unpacking debconf-utils (from .../debconf-utils_1.5.32ubuntu3_all.deb) ...
        Selecting previously deselected package debootstrap.
        Unpacking debootstrap (from .../debootstrap_1.0.23ubuntu1_all.deb) ...
        Selecting previously deselected package nbd-server.
        Unpacking nbd-server (from .../nbd-server_1%3a2.9.14-2ubuntu1_i386.deb) ...
        Selecting previously deselected package squashfs-tools.
        Unpacking squashfs-tools (from .../squashfs-tools_1%3a4.0-8_i386.deb) ...
        Selecting previously deselected package ltsp-server.

        GNU nano 2.2.4            File: /etc/ltsp/ltsp-update-image.conf

        # Configuration file for ltsp-update-image
        # By default, do not compress the image
        # as it's reported to make it unstable
        NO_COMP="-noF -noD -noI -no-exports"

        [ Switched to /etc/ltsp/ltsp-update-image.conf ]
        administrator@rauta-mint-turion ~ $ ls /opt/
        firefox/  ltsp/  mint-flashplugin/
        administrator@rauta-mint-turion ~ $ ls /opt/ltsp/i386/
        administrator@rauta-mint-turion ~ $ ls /opt/ltsp/
        i386
        administrator@rauta-mint-turion ~ $ sudo ltsp
        ltsp-build-client    ltsp-chroot          ltspfs               ltspfsmounter        ltsp-info
        ltsp-localapps       ltsp-update-image    ltsp-update-kernels  ltsp-update-sshkeys
        administrator@rauta-mint-turion ~ $ sudo ltsp-build-client
        NOTE: Root directory /opt/ltsp/i386 already exists, this will lead to problems, please remove it before trying again. Exiting.
        error: LTSP client installation ended abnormally
        administrator@rauta-mint-turion ~ $ ^C
        administrator@rauta-mint-turion ~ $ sudo aptitude purge ltsp ltspfs ltsp-server ltsp-server-standalone
        administrator@rauta-mint-turion ~ $ sudo aptitude purge ltsp-server
        The following packages will be REMOVED:
          ltsp-server{p}
        0 packages upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
        Need to get 0B of archives. After unpacking 1,073kB will be freed.
        The following packages have unmet dependencies:
          ltsp-server-standalone: Depends: ltsp-server but it is not going to be installed.
        The following actions will resolve these dependencies:
          Remove the following packages:
          1) ltsp-server-standalone
        Accept this solution? [Y/n/q/?]
        The following packages will be REMOVED:
          debconf-utils{u} debootstrap{u} ldm-server{u} ltsp-server{p} ltsp-server-standalone{a} ltspfs{u} nbd-server{u}
          openbsd-inetd{u} squashfs-tools{u}
        0 packages upgraded, 0 newly installed, 9 to remove and 0 not upgraded.
        Need to get 0B of archives. After unpacking 2,437kB will be freed.
        Do you want to continue? [Y/n/?]
        (Reading database ... 158244 files and directories currently installed.)
        Removing ltsp-server-standalone ...
        (Reading database ... 158240 files and directories currently installed.)
        Removing ltsp-server ...
        Purging configuration files for ltsp-server ...
        dpkg: warning: while removing ltsp-server, directory '/var/lib/tftpboot/ltsp' not empty so not removed.
        dpkg: warning: while removing ltsp-server, directory '/var/lib/tftpboot' not empty so not removed.
        Processing triggers for man-db ...
        (Reading database ... 157987 files and directories currently installed.)
        Removing debconf-utils ...
        Removing debootstrap ...
        Removing ldm-server ...
        Removing ltspfs ...
        Removing nbd-server ...
        Stopping Network Block Device server: nbd-server.
        Removing openbsd-inetd ...
         * Stopping internet superserver inetd   [ OK ]
        Removing squashfs-tools ...
        Processing triggers for man-db ...
        Processing triggers for ureadahead ...
        administrator@rauta-mint-turion ~ $ sudo aptitude purge ~c
        The following packages will be REMOVED:
          ltsp-server-standalone{p} nbd-server{p} openbsd-inetd{p}
        0 packages upgraded, 0 newly installed, 3 to remove and 0 not upgraded.
        Need to get 0B of archives. After unpacking 0B will be used.
        Do you want to continue? [Y/n/?]
        (Reading database ... 157871 files and directories currently installed.)
        Removing ltsp-server-standalone ...
        Purging configuration files for ltsp-server-standalone ...
        Removing nbd-server ...
        Purging configuration files for nbd-server ...
        Removing openbsd-inetd ...
        Purging configuration files for openbsd-inetd ...
        Processing triggers for ureadahead ...
administrator@rauta-mint-turion ~ $ sudo rm -rv /var/lib/t teamspeak-server/ tftpboot/ transmission-daemon/ administrator@rauta-mint-turion ~ $ sudo rm -rv /var/lib/tftpboot removed `/var/lib/tftpboot/ltsp/i386/pxelinux.cfg/default' removed directory: `/var/lib/tftpboot/ltsp/i386/pxelinux.cfg' removed directory: `/var/lib/tftpboot/ltsp/i386' removed directory: `/var/lib/tftpboot/ltsp' removed directory: `/var/lib/tftpboot' administrator@rauta-mint-turion ~ $ sudo find / -name "ltsp" /opt/ltsp administrator@rauta-mint-turion ~ $ sudo rm -rv /opt/ltsp removed directory: `/opt/ltsp/i386' removed directory: `/opt/ltsp' administrator@rauta-mint-turion ~ $ sudo aptitude install ltsp-server-standalone The following NEW packages will be installed: debconf-utils{a} debootstrap{a} ldm-server{a} ltsp-server{a} ltsp-server-standalone ltspfs{a} nbd-server{a} openbsd-inetd{a} squashfs-tools{a} The following packages are RECOMMENDED but will NOT be installed: dhcp3-server pulseaudio-esound-compat tftpd-hpa 0 packages upgraded, 9 newly installed, 0 to remove and 0 not upgraded. Need to get 0B/498kB of archives. After unpacking 2,437kB will be used. Do you want to continue? [Y/n/?] Preconfiguring packages ... Selecting previously deselected package openbsd-inetd. (Reading database ... 157868 files and directories currently installed.) Unpacking openbsd-inetd (from .../openbsd-inetd_0.20080125-4ubuntu2_i386.deb) ... Processing triggers for man-db ... GNU nano 2.2.4 New Buffer GNU nano 2.2.4 File: /etc/ltsp/dhcpd.conf # # Default LTSP dhcpd.conf config file. # authoritative; subnet 192.168.2.0 netmask 255.255.255.0 { range 192.168.2.70 192.168.2.79; option domain-name "jarvinen"; option domain-name-servers 192.168.2.254; option broadcast-address 192.168.2.255; option routers 192.168.2.254; # next-server 192.168.0.1; # get-lease-hostnames true; option subnet-mask 255.255.255.0; option root-path "/opt/ltsp/i386"; if substring( option vendor-class-identifier, 0, 9 ) = "PXEClient" { filename "/ltsp/i386/pxelinux.0"; } else { filename "/ltsp/i386/nbi.img"; } } [ Wrote 22 lines ] administrator@rauta-mint-turion ~ $ sudo service dhcp3-server start * Starting DHCP server dhcpd3 [ OK ] administrator@rauta-mint-turion ~ $ sudo ltsp-build-client /usr/share/ltsp/plugins/ltsp-build-client/common/010-chroot-tagging: line 3: /opt/ltsp/i386/etc/ltsp_chroot: No such file or directory error: LTSP client installation ended abnormally administrator@rauta-mint-turion ~ $ sudo cat /usr/share/ltsp/plugins/ltsp-build-client/common/010-chroot-tagging case "$MODE" in after-install) echo LTSP_CHROOT=$ROOT >> $ROOT/etc/ltsp_chroot ;; esac administrator@rauta-mint-turion ~ $ cd $ROOT/etc/ltsp_chroot bash: cd: /etc/ltsp_chroot: No such file or directory administrator@rauta-mint-turion ~ $ cd $ROOT/etc/ administrator@rauta-mint-turion /etc $ ls acpi chatscripts emacs group insserv.conf.d logrotate.conf mysql php5 rc6.d smbnetfs.conf UPower adduser.conf ConsoleKit environment group- iproute2 logrotate.d nanorc phpmyadmin rc.local snmp usb_modeswitch.conf alternatives console-setup esound grub.d issue lsb-base nbd-server pm rcS.d sound usb_modeswitch.d anacrontab cron.d firefox gshadow issue.net lsb-base-logging.sh ndiswrapper pnm2ppa.conf request-key.conf ssh ushare.conf apache2 cron.daily firefox-3.5 gshadow- java lsb-release netscsid.conf polkit-1 resolvconf ssl vga apm cron.hourly firestarter gtk-2.0 java-6-sun ltrace.conf network popularity-contest.conf resolv.conf sudoers vim apparmor cron.monthly fonts gtkmathview kbd ltsp 
NetworkManager ppp rmt sudoers.d vlc apparmor.d crontab foomatic gufw kernel lxdm networks printcap rpc su-to-rootrc vsftpd.conf apport cron.weekly fstab hal kernel-img.conf magic nsswitch.conf profile rsyslog.conf sweeprc w3m apt crypttab ftpusers hdparm.conf kerneloops.conf magic.mime ntp.conf profile.d rsyslog.d sysctl.conf wgetrc at.deny cups fuse.conf host.conf kompozer mailcap obex-data-server protocols samba sysctl.d wildmidi auto-apt dbconfig-common gai.conf hostname ldap mailcap.order ODBCDataSources psiconv sane.d teamspeak-server wodim.conf avahi dbus-1 gamin hosts ld.so.cache manpath.config odbc.ini pulse screenrc terminfo wpa_supplicant bash.bashrc debconf.conf gconf hosts.allow ld.so.conf menu openal purple securetty thunderbird X11 bash_completion debian_version gdb hosts.deny ld.so.conf.d menu-methods openoffice python security timezone xdg bash_completion.d default gdm hp legal mime.types opt python2.6 sensors3.conf transmission-daemon xml bindresvport.blacklist defoma ghostscript ifplugd lftp.conf mke2fs.conf pam.conf python2.7 sensors.d ts.conf xulrunner-1.9.2 blkid.conf deluser.conf gimp inetd.conf libpaper.d modprobe.d pam.d python3.1 services ucf.conf zsh_command_not_found blkid.tab depmod.d gnome init libreoffice modules pango rc0.d sgml udev bluetooth dhcp gnome-system-tools init.d linuxmint motd papersize rc1.d shadow ufw bonobo-activation dhcp3 gnome-vfs-2.0 initramfs-tools locale.alias mplayer passwd rc2.d shadow- updatedb.conf ca-certificates dictionaries-common gnome-vfs-mime-magic inputrc localtime mtab passwd- rc3.d shells update-manager ca-certificates.conf doc-base gre.d insserv logcheck mtab.fuselock pcmcia rc4.d skel update-motd.d calendar dpkg groff insserv.conf login.defs mtools.conf perl rc5.d smb2www update-notifier administrator@rauta-mint-turion /etc $ cd ltsp/ administrator@rauta-mint-turion /etc/ltsp $ ls dhcpd.conf ltsp-update-image.conf administrator@rauta-mint-turion /etc/ltsp $ cat dhcpd.conf # # Default LTSP dhcpd.conf config file. # authoritative; subnet 192.168.2.0 netmask 255.255.255.0 { range 192.168.2.70 192.168.2.79; option domain-name "jarvinen"; option domain-name-servers 192.168.2.254; option broadcast-address 192.168.2.255; option routers 192.168.2.254; # next-server 192.168.0.1; # get-lease-hostnames true; option subnet-mask 255.255.255.0; option root-path "/opt/ltsp/i386"; if substring( option vendor-class-identifier, 0, 9 ) = "PXEClient" { filename "/ltsp/i386/pxelinux.0"; } else { filename "/ltsp/i386/nbi.img"; } } administrator@rauta-mint-turion /etc/ltsp $
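
    A plausible root cause on Mint: ltsp-build-client derives the chroot's release codename from lsb_release, and Mint codenames don't exist on Ubuntu mirrors, so the debootstrap stage fails silently and leaves /opt/ltsp/i386 empty; every later plugin (including the 010-chroot-tagging script above) then trips over the missing tree. A minimal sketch for isolating this, assuming LM10 tracks Ubuntu 10.10 "maverick" (an assumption; adjust --dist and --mirror to your actual base release):

        sudo rm -rf /opt/ltsp/i386
        # run the bootstrap by hand so any error is visible on the console
        sudo debootstrap --arch i386 maverick /opt/ltsp/i386 http://archive.ubuntu.com/ubuntu
        # if that completes cleanly, retry the LTSP build pinned to the same release
        sudo rm -rf /opt/ltsp/i386
        sudo ltsp-build-client --arch i386 --dist maverick --mirror http://archive.ubuntu.com/ubuntu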

    Read the article

  • Windows Server 2003 Terminal Server does not give out all available licenses (solved)

    - by Erwin Blonk
    I installed the Terminal Server role in Windows Server 2003 Standard 64-bit. Still, only 2 connections are allowed. The License Manager says that there are 10 Device CALs available, which is correct, and that none are given out. For good measure I let the server reboot, to no effect. Before this, there was another server (same Windows, except that it is 32-bit) active as a licensing server. I removed the role first and then added it to the new server. I then removed the Terminal Server Licensing Server component from the old one and added it to the new one. After that, I added the licenses. When that didn't give the required result, I rebooted the new server. Still, the new server, with licenses and all, acts as if it only has the 2 default RDP licenses. The servers are all stand-alone; no Active Directory has been set up. Both servers are in different workgroups.

    Update (4/12/10): The server has changed the entries in Terminal Server Licensing a few times. After installing the licenses it added an entry whose exact phrasing I forget, but it was about temporary Windows 2003 device licenses. Later it added "Windows Server 2003 - TS Per Device CAL". The temporary pool held 2 licenses (standard RDP licenses, I think) and the other held 10. At some point, seemingly unrelated to the testing we did, it used a license from the new pool. This morning, 2 licenses were used from the pool of 10 and only 1 from the temporary/RDP pool (I wish I had screenshots to show; it changed every few hours or so, it seems). Although I had already activated the server over the internet, and re-activated it, I decided to go through the whole procedure by phone.

    Update 2 (4/12/10): The problem has been solved. It seems the activation over the web, while it reported success, did not work correctly. After activating by phone, it did work. What was different from the old setup, and what put me on the wrong foot from that moment on, was that I now need to create separate user accounts, because a session opened with one user account will be taken over when someone else logs in with that same account. On the previous server, it was possible to open several sessions with the same account. We now use Per Device licenses; I'm not sure what was used before. Thanks all for the replies.
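
    For reference, the session-takeover behaviour described in Update 2 is the "Restrict each user to one session" setting. A hedged sketch of the usual toggle (this is the standard registry location on Server 2003, but verify on your build before relying on it):

        rem 0 = allow multiple simultaneous sessions per user account, 1 = restrict to one
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fSingleSessionPerUser /t REG_DWORD /d 0 /f

    The same setting is exposed in the Terminal Services Configuration console, which avoids editing the registry directly.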

    Read the article

  • IIS6 Log time recording problems

    - by Hafthor
    On three separate occasions, on two separate servers, at nearly the same times, 6.9 hours seemingly went by without any data being written to the IIS logs; but on closer inspection, it appears it was all recorded at once. Here are the facts as I know them:

    - Windows Server 2003 R2 w/ IIS6
    - Logging using GMT, server local time GMT-7
    - Application was still operating, and I have SQL data to prove that
    - Time gaps appear within a log file, not across two
    - # headers appear at the gap
    - Load balancer pings every 30 seconds
    - No caching

    Here's info on a particular case: an entry appears for 2009-09-21 18:09:27, then #headers; the next entry is for 2009-09-22 01:21:54, and so are the next 1600 entries in this log file and 370 in the next log file. About half of the ~2000 entries stamped 2009-09-22 01:21:54 are load balancer pings (est. at 2/min for 6.9 hrs = 828 pings). Then entries are recorded as normal. I believe these events may coincide with me deploying an ASP.NET application update onto those machines. Here's some relevant content from the logs in question:

        ex090921.log, line 3684:
        2009-09-21 17:54:40 GET /ping.aspx - 80 404 0 0 3733 122 0
        2009-09-21 17:55:11 GET /ping.aspx - 80 404 0 0 3733 122 0
        2009-09-21 17:55:42 GET /ping.aspx - 80 404 0 0 3733 122 0
        2009-09-21 17:56:13 GET /ping.aspx - 80 404 0 0 3733 122 0
        2009-09-21 17:56:45 GET /ping.aspx - 80 404 0 0 3733 122 0
        #Software: Microsoft Internet Information Services 6.0
        #Version: 1.0
        #Date: 2009-09-21 18:04:37
        #Fields: date time cs-method cs-uri-stem cs-uri-query s-port sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken
        2009-09-22 01:04:06 GET /ping.aspx - 80 404 0 0 3733 122 3078
        2009-09-22 01:04:06 GET /ping.aspx - 80 404 0 0 3733 122 109
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 3828
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 0
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 0
        ... continues until line 5449
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0
        <eof>

        ex090922.log:
        #Software: Microsoft Internet Information Services 6.0
        #Version: 1.0
        #Date: 2009-09-22 00:00:16
        #Fields: date time cs-method cs-uri-stem cs-uri-query s-port sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0
        ... continues until line 367
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0
        2009-09-22 01:04:30 GET /ping.aspx - 80 200 0 0 277 122 0
        ... back to normal behavior

    Note the seemingly correct date/time written to the #header of the new log file. Also note that /ping.aspx returned 404 and then switched to 200 just as the problem started. I rename the "I'm alive" page so the load balancer stops sending requests to the server while I'm working on it; what you see here is me renaming it back so the load balancer will use the server again. So this problem definitely coincides with me re-enabling the server. Any ideas?
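
    To quantify gaps like this across many log files, a small scan helps. A minimal sketch, assuming GNU awk (for mktime) and the field order from the #Fields header above, where date and time are the first two fields:

        # report any jump of more than 10 minutes between consecutive entries
        awk 'FNR == 1 { prev = 0 }
             !/^#/ && NF >= 2 {
                 split($1, d, "-"); split($2, t, ":")
                 ts = mktime(d[1] " " d[2] " " d[3] " " t[1] " " t[2] " " t[3])
                 if (prev && ts - prev > 600)
                     printf "%s: %d s gap before line %d (%s %s)\n", FILENAME, ts - prev, FNR, $1, $2
                 prev = ts
             }' ex090921.log ex090922.log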

    Read the article

  • Apache2 name based virtual host always redirect 301

    - by Francesco
    I've got a server (running Debian Squeeze) with Apache 2.2; there are 4 sites running there. I'm using name-based virtual hosts because I've got a single IP. The initial configuration was made with Webmin, and probably something has been messed up. firstdomain.com is my default domain and is working correctly; seconddomain.com is another site that is working. Now I want to add lastdomain.tk as a new site, so I've made this config file:

        root@webamp:/etc/apache2# cat sites-available/lastdomain.tk.conf
        <VirtualHost *:80>
            DocumentRoot /home/server/Condivisione/RAID/lastdomain.tk
            ServerName www.alazanes.tk
            ServerAlias alazanes.tk
        </VirtualHost>

    I've added it to sites-enabled and restarted Apache. The problem is that if I go to lastdomain.tk (or www.lastdomain.tk) I'm redirected to firstdomain.com with a 301 redirect. Both lastdomain.tk and www.lastdomain.tk are DNS A records pointing to my IP address. The strange thing is that if I change the DocumentRoot of lastdomain.tk to DocumentRoot /home/server/Condivisione/RAID/Sito_SecondDomain, I correctly see seconddomain.com's content without being redirected (lastdomain.tk is shown in the address bar). These are the other configurations I'm using:

        root@webamp:/root# source /etc/apache2/envvars ; /usr/sbin/apache2 -S
        VirtualHost configuration:
        wildcard NameVirtualHosts and _default_ servers:
        *:443     webamp.firstdomain.com (/etc/apache2/sites-enabled/ssl.bbteam:1)
        *:80      is a NameVirtualHost
                  default server firstdomain.com (/etc/apache2/sites-enabled/000-default:7)
                  port 80 namevhost firstdomain.com (/etc/apache2/sites-enabled/000-default:7)
                  port 80 namevhost www.lastdomain.tk (/etc/apache2/sites-enabled/lastdomain.tk.conf:1)
                  ## other domains ##
                  port 80 namevhost seconddomain.com (/etc/apache2/sites-enabled/seconddomain.com.conf:1)
        Syntax OK

    The content of the default config file is:

        root@webamp:/etc/apache2# cat sites-available/default
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName firstdomain.com
            ServerAlias www.firstdomain.com direct.firstdomain.com
            DocumentRoot /home/server/Condivisione/RAID/Sito_Web_Apache_su_80
            ErrorLog /var/log/apache2/error.log
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
        </VirtualHost>

    The content of the second domain's config file is:

        root@webamp:/etc/apache2# cat sites-available/seconddomain.com.conf
        <VirtualHost *:80>
            DocumentRoot /home/server/Condivisione/RAID/Sito_SecondDomain
            ServerName seconddomain.com
            ServerAlias www.seconddomain.com direct.seconddomain.com
            #redirect 301 / http://www.seconddomain.com/
            <Directory "/home/server/Condivisione/RAID/Sito_SecondDomain">
                allow from all
                Options +Indexes
            </Directory>
        </VirtualHost>

    Probably a file permission problem?

        root@webamp:/root# ls -lh /home/server/Condivisione/RAID/
        total 7.1M
        drwxrwxr-x 15 www-data server 4.0K Jun 5 13:29 Sito_SecondDomain
        drwxrwxrwx 23 server   server 4.0K Jun 7 16:22 Sito_Web_Apache_su_80
        drwxrwxr-x 17 www-data server 4.0K Jun 8 09:56 alazanes.tk

    Does someone have an idea of what is happening? Thanks, Francesco
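
    One way to narrow this down is to take DNS out of the picture and ask Apache directly which vhost answers for each Host header. A quick sketch (SERVER_IP is a placeholder for the server's public address):

        curl -sI -H "Host: lastdomain.tk" http://SERVER_IP/ | head -n 5
        curl -sI -H "Host: www.lastdomain.tk" http://SERVER_IP/ | head -n 5

    With name-based vhosts, a request whose Host matches no ServerName or ServerAlias falls through to the first (default) vhost, here firstdomain.com. Note that the lastdomain.tk vhost above only answers to www.alazanes.tk and alazanes.tk; if those names were not meant to be the same site, that mismatch alone would explain the fallthrough.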

    Read the article

  • Server.CreateObject Fails when calling .Net object from ASP on 64-bit windows in IIS 32-bit mode

    - by DrFredEdison
    I have a server running Windows 2003 64-bit that runs IIS in 32-bit mode. I have a COM object that was registered using the following command:

        C:\WINDOWS\microsoft.net\Framework\v2.0.50727>regasm D:\Path\To\MyDll.dll /tlb:MyTLB.tlb /codebase

    When I create the object via ASP I get:

        Server object error 'ASP 0177 : 8000ffff'
        Server.CreateObject Failed
        /includes/a_URLFilter.asp, line 19
        8000ffff

    When I create the object in a VBS script and use the 32-bit version of cscript (in \Windows\syswow64), it works fine. I've checked permissions on the DLL, and the IUSR has Read/Execute. Even if I add the IUSR to the Administrators group, I get the same error. This is the log from Process Monitor, filtered for the path of my DLL (annotated with my actions):

        [Stop IIS]
        1:56:30.0891918 PM  w3wp.exe  4088  CloseFile  D:\Path\To\MyDll.dll  SUCCESS
        [Start IIS]
        [Refresh ASP page that uses DLL]
        1:56:42.7825154 PM  w3wp.exe  2196  QueryOpen  D:\Path\To\MyDll.dll  SUCCESS  CreationTime: 8/19/2009 1:11:17 PM, LastAccessTime: 8/19/2009 1:30:26 PM, LastWriteTime: 8/18/2009 12:09:33 PM, ChangeTime: 8/19/2009 1:22:02 PM, AllocationSize: 20,480, EndOfFile: 20,480, FileAttributes: A
        1:56:42.7825972 PM  w3wp.exe  2196  QueryOpen  D:\Path\To\MyDll.dll  SUCCESS  CreationTime: 8/19/2009 1:11:17 PM, LastAccessTime: 8/19/2009 1:30:26 PM, LastWriteTime: 8/18/2009 12:09:33 PM, ChangeTime: 8/19/2009 1:22:02 PM, AllocationSize: 20,480, EndOfFile: 20,480, FileAttributes: A
        1:56:42.7826961 PM  w3wp.exe  2196  CreateFile  D:\Path\To\MyDll.dll  SUCCESS  Desired Access: Generic Read, Disposition: Open, Options: Synchronous IO Non-Alert, Non-Directory File, Attributes: N, ShareMode: Read, Delete, AllocationSize: n/a, Impersonating: SERVER2\IUSR_SERVER2, OpenResult: Opened
        1:56:42.7827194 PM  w3wp.exe  2196  CreateFileMapping  D:\Path\To\MyDll.dll  SUCCESS  SyncType: SyncTypeCreateSection, PageProtection:
        1:56:42.7827546 PM  w3wp.exe  2196  CreateFileMapping  D:\Path\To\MyDll.dll  SUCCESS  SyncType: SyncTypeOther
        1:56:42.7829130 PM  w3wp.exe  2196  Load Image  D:\Path\To\MyDll.dll  SUCCESS  Image Base: 0x6350000, Image Size: 0x8000
        1:56:42.7830590 PM  w3wp.exe  2196  Load Image  D:\Path\To\MyDll.dll  SUCCESS  Image Base: 0x6360000, Image Size: 0x8000
        1:56:42.7838855 PM  w3wp.exe  2196  CreateFile  D:\Webspace\SecurityDll\bin  SUCCESS  Desired Access: Read Data/List Directory, Synchronize, Disposition: Open, Options: Directory, Synchronous IO Non-Alert, Attributes: n/a, ShareMode: Read, Write, Delete, AllocationSize: n/a, Impersonating: SERVER2\IUSR_SERVER2, OpenResult: Opened
        1:56:42.7839081 PM  w3wp.exe  2196  QueryDirectory  D:\Path\To\MyDll.INI  NO SUCH FILE  Filter: SecurityDll.INI
        1:56:42.7839281 PM  w3wp.exe  2196  CloseFile  D:\Webspace\SecurityDll\bin  SUCCESS
        [Refresh ASP page that uses DLL]
        [Refresh ASP page that uses DLL]
        [Refresh ASP page that uses DLL]

    This DLL works fine on other servers running 32-bit Windows. I can't think of anything else that would keep it from working here. Any suggestions?

    UPDATE: The .dll is not in the GAC, it is compiled as 32-bit, and it is strongly signed.
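
    Since the object creates fine under the 32-bit script host, one hedged check is whether the registration actually landed in the registry view the 32-bit worker process reads (the Wow6432Node view on x64). A sketch, with "MyDll" standing in for the real assembly name:

        rem 64-bit view (what a 64-bit process resolves against)
        reg query "HKCR\CLSID" /s /f "MyDll"
        rem 32-bit view (what the 32-bit w3wp.exe resolves against)
        reg query "HKCR\Wow6432Node\CLSID" /s /f "MyDll"

    If the CLSID and its CodeBase entries show up in only one view, re-running regasm from the matching Framework directory (not Framework64) in an elevated prompt is the first thing to try.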

    Read the article

  • Creating a dynamic lacp trunk from HP Procurve 2412zl to Proliant DL380 G7

    - by Maalobs
    I'm configuring an IEEE 802.3ad (LACP) dynamic trunk from an HP ProCurve 2412zl switch (firmware version K.15.07) to an HP ProLiant DL380 G7 server. The DL380 has 4 NICs and is running Win2008 R2, so I'm teaming the NICs together and leaving everything on the recommended "automatic" setting in the HP NIC configuration tool. The server is one of two; they'll be connected on interfaces F17-F20 and F21-F24 respectively on the switch. I need the servers in a separate VLAN. Here is the configuration for the VLAN:

        vlan 10
           name "Lab_Mgmt"
           untagged B2,F17-F24
           ip address 172.22.71.3 255.255.255.0
           tagged B21
           exit

    There is a DHCP relay into VLAN 10 from another device beyond interface B21. The Advanced Traffic Management Guide says that in order to run a dynamic LACP trunk on another VLAN besides the DEFAULT_VLAN, you need to first enable GVRP and then use "forbid" to stop the interfaces from automatically joining DEFAULT_VLAN when the dynamic trunk is created. GVRP brings some other stuff with it that I don't want or need, so I disable it with "unknown-vlans disable" on all other interfaces. Here is how I do it:

        procurve-5412zl-1(config)# gvrp
        procurve-5412zl-1(config)# interface A1-A24,B1-B24,C1-C24,D1-D10,D13-D24,E1-E24,F1-F16,K1,K2 unknown-vlans disable
        procurve-5412zl-1(config)# vlan 1 forbid F17-F24
        procurve-5412zl-1(config)# interface F17-F20 lacp active

    The result afterwards looks all successful:

        procurve-5412zl-1(config)# show trunks
        Load Balancing Method:  L3-based (Default), L2-based if non-IP traffic

          Port | Name                             Type      | Group Type
          ---- + -------------------------------- --------- + ------ --------
          F17  | XYZTEAM3_NIC1                    100/1000T | Dyn2   LACP
          F18  | XYZTEAM3_NIC2                    100/1000T | Dyn2   LACP
          F19  | XYZTEAM3_NIC3                    100/1000T | Dyn2   LACP
          F20  | XYZTEAM3_NIC4                    100/1000T | Dyn2   LACP

        procurve-5412zl-1(config)# vlan 10
        procurve-5412zl-1(vlan-10)# show lacp
                                   LACP
                 LACP      Trunk   Port               LACP      Admin  Oper
          Port   Enabled   Group   Status   Partner   Status    Key    Key
          ----   -------   ------- -------  -------   -------   ------ ------
          F17    Active    Dyn2    Up       Yes       Success   0      0
          F18    Active    Dyn2    Up       Yes       Success   0      0
          F19    Active    Dyn2    Up       Yes       Success   0      0
          F20    Active    Dyn2    Up       Yes       Success   0      0

    On the ProLiant server, the NIC configuration tool is also indicating that an 802.3ad dynamic trunk has been established. Everything should be good, but it isn't. The server is not getting an IP address from the DHCP, which it does if I don't enable LACP. If I configure the server with a static IP address on the VLAN 10 subnet, it can't even ping the switch IP address, much less anything outside the VLAN. The switch can't ping the server either. I made another attempt with F17-F20 tagged and the box "Default Native Tag (VLAN 10)" checked in the NIC configuration tool on the server, but there was no difference. Does anyone have any idea what I might have missed?
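
    If the dynamic trunk keeps ending up outside VLAN 10, a common workaround on ProCurve switches is a manually defined LACP trunk: a named trunk interface can simply be made an untagged member of the VLAN, so GVRP and the forbid rules are no longer needed. A sketch, assuming standard ProCurve trunk syntax (trk1 is an arbitrary trunk name; verify the exact commands against your firmware's CLI reference):

        no interface F17-F20 lacp
        trunk F17-F20 trk1 lacp
        vlan 10 untagged trk1

    The Windows teaming mode would stay on 802.3ad; only the switch side changes from a dynamic to a statically defined LACP trunk.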

    Read the article

  • windows 2003 server : can't find a primary authoritative dns server for the name srv.domain1.local [

    - by phill
    I originally tried to rejoin a computer to the network, which led to a "cannot find domain" error. The username/password box doesn't even come up. Some tests I ran: I can ping the server; however, I can't ping the domain name domain1.local. nslookup can't find the domain either; it looks to the ISP's DNS instead of my own to resolve the local machines. So I go to the DNS server and run netdiag.exe, and it gives me this error:

        DNS test . . . . . . . . . . . . . : Failed
        [WARNING] Cannot find a primary authoritative DNS server for the name
            'stmartinsrv.stmartin.local.'. [RCODE_SERVER_FAILURE]
            The name 'srv.domain1.local.' may not be registered in DNS.
        [WARNING] The DNS entries for this DC are not registered correctly on DNS server '68.94.156.1'. Please wait for 30 minutes for DNS server replication.
        [WARNING] The DNS entries for this DC are not registered correctly on DNS server '68.94.157.1'. Please wait for 30 minutes for DNS server replication.
        [FATAL] No DNS servers have the DNS records for this DC registered.

        Redir and Browser test . . . . . . : Passed
        List of NetBt transports currently bound to the Redir
            NetBT_Tcpip_{04BB0F6B-06AE-4D60-80C8-2A7A24C1D87B}
        The redir is bound to 1 NetBt transport.
        List of NetBt transports currently bound to the browser
            NetBT_Tcpip_{04BB0F6B-06AE-4D60-80C8-2A7A24C1D87B}
        The browser is bound to 1 NetBt transport.

    From previous postings, I've tried adding the domain suffix to the NIC's IP properties on both the client machine and the DC server, which didn't help. Any ideas? Thanks in advance.
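
    A hedged first step, assuming the DC is supposed to be its own DNS server: point the server's NIC at its own address (not the ISP's resolvers) for DNS, then re-register its records and confirm the AD SRV records resolve:

        ipconfig /flushdns
        ipconfig /registerdns
        net stop netlogon
        net start netlogon
        nslookup -type=SRV _ldap._tcp.dc._msdcs.domain1.local

    The netdiag warnings above show the DC trying to register with 68.94.156.1 and 68.94.157.1, which look like ISP resolvers; those will never accept dynamic registrations for domain1.local, so clients pointed at them can't find the domain either.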

    Read the article

  • Xen won't start after it had been working

    - by Paul Tomblin
    I've been setting up this Debian Stable system with a dom0 and 3 domUs. It was working fine for several days, and I'm almost ready to deploy it to the rack. But last night I shut it down with all three domUs still running for the first time, and today when I started it up, xend won't start. In /var/log/messages, I have:

        Apr 18 13:01:33 xen-test BLKTAPCTRL[4248]: blktapctrl: v1.0.0
        Apr 18 13:01:33 xen-test BLKTAPCTRL[4248]: Found driver: [raw image (aio)]
        Apr 18 13:01:33 xen-test BLKTAPCTRL[4248]: Found driver: [raw image (sync)]
        Apr 18 13:01:33 xen-test BLKTAPCTRL[4248]: Found driver: [vmware image (vmdk)]
        Apr 18 13:01:33 xen-test BLKTAPCTRL[4248]: Found driver: [ramdisk image (ram)]
        Apr 18 13:01:33 xen-test BLKTAPCTRL[4248]: Found driver: [qcow disk (qcow)]
        Apr 18 13:01:33 xen-test BLKTAPCTRL[4248]: couldn't find device number for 'blktap0'
        Apr 18 13:01:33 xen-test BLKTAPCTRL[4248]: Unable to start blktapctrl

    and in /var/log/xen/xend.log, I have this:

        [2010-04-18 12:46:32 3523] INFO (SrvDaemon:219) Xend exited with status 1.
        [2010-04-18 13:01:34 4255] INFO (SrvDaemon:331) Xend Daemon started
        [2010-04-18 13:01:34 4255] INFO (SrvDaemon:335) Xend changeset: unavailable.
        [2010-04-18 13:01:34 4255] INFO (SrvDaemon:342) Xend version: Unknown.
        [2010-04-18 13:01:34 4255] ERROR (SrvDaemon:353) Exception starting xend (no element found: line 1, column 0)
        Traceback (most recent call last):
          File "/usr/lib/xen-3.2-1/lib/python/xen/xend/server/SrvDaemon.py", line 345, in run
            servers = SrvServer.create()
          File "/usr/lib/xen-3.2-1/lib/python/xen/xend/server/SrvServer.py", line 251, in create
            root.putChild('xend', SrvRoot())
          File "/usr/lib/xen-3.2-1/lib/python/xen/xend/server/SrvRoot.py", line 40, in __init__
            self.get(name)
          File "/usr/lib/xen-3.2-1/lib/python/xen/web/SrvDir.py", line 82, in get
            val = val.getobj()
          File "/usr/lib/xen-3.2-1/lib/python/xen/web/SrvDir.py", line 52, in getobj
          File "/usr/lib/xen-3.2-1/lib/python/xen/xend/server/SrvNode.py", line 30, in __init__
            self.xn = XendNode.instance()
          File "/usr/lib/xen-3.2-1/lib/python/xen/xend/XendNode.py", line 709, in instance
            inst = XendNode()
          File "/usr/lib/xen-3.2-1/lib/python/xen/xend/XendNode.py", line 164, in __init__
            saved_pifs = self.state_store.load_state('pif')
          File "/usr/lib/xen-3.2-1/lib/python/xen/xend/XendStateStore.py", line 104, in load_state
            dom = minidom.parse(xml_path)
          File "/usr/lib/python2.5/xml/dom/minidom.py", line 1915, in parse
            return expatbuilder.parse(file)
          File "/usr/lib/python2.5/xml/dom/expatbuilder.py", line 924, in parse
            result = builder.parseFile(fp)
          File "/usr/lib/python2.5/xml/dom/expatbuilder.py", line 211, in parseFile
            parser.Parse("", True)
        ExpatError: no element found: line 1, column 0
        [2010-04-18 13:01:34 4253] INFO (SrvDaemon:219) Xend exited with status 1.

    Any clues as to what might be going wrong?
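
    The traceback points at XendStateStore.load_state('pif') failing to parse a saved-state XML file, and "no element found: line 1, column 0" is what expat reports for an empty file. A minimal sketch of a recovery attempt, assuming xend keeps its state in /var/lib/xend/state (the usual location for Xen 3.2; verify the path on Debian):

        ls -l /var/lib/xend/state/
        # move any zero-length .xml files aside and let xend regenerate them
        sudo mv /var/lib/xend/state/pif.xml /var/tmp/pif.xml.bak
        sudo /etc/init.d/xend start

    Shutting the box down with domUs still running is a plausible way for one of these files to have been truncated mid-write.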

    Read the article

  • multiple puppet masters

    - by Oli
    I would like to set up an additional puppet master but have the CA server handled by only 1 puppet master. I have set this up as per the documentation here: http://docs.puppetlabs.com/guides/scaling_multiple_masters.html I have configured my second puppet master as follows:

        [main]
        ...
        ca = false
        ca_server = puppet-master1.test.net

    I am using Passenger, so I am a bit confused about how the virtual-host.conf file should look for my second puppet-master2.test.net. Here is mine (updated as per Shane Madden's answer):

        LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/apache2/mod_passenger.so
        PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-3.0.18
        PassengerRuby /usr/bin/ruby

        Listen 8140
        <VirtualHost *:8140>
            ProxyPassMatch ^/([^/]+/certificate.*)$ https://puppet-master1.test.net:8140/$1

            SSLEngine on
            SSLProtocol -ALL +SSLv3 +TLSv1
            SSLCipherSuite ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP

            SSLCertificateFile /var/lib/puppet/ssl/certs/puppet-master2.test.net.pem
            SSLCertificateKeyFile /var/lib/puppet/ssl/private_keys/puppet-master2.test.net.pem
            #SSLCertificateChainFile /var/lib/puppet/ssl/ca/ca_crt.pem
            #SSLCACertificateFile /var/lib/puppet/ssl/ca/ca_crt.pem
            # If Apache complains about invalid signatures on the CRL, you can try disabling
            # CRL checking by commenting the next line, but this is not recommended.
            #SSLCARevocationFile /var/lib/puppet/ssl/ca/ca_crl.pem
            SSLVerifyClient optional
            SSLVerifyDepth 1
            # The `ExportCertData` option is needed for agent certificate expiration warnings
            SSLOptions +StdEnvVars +ExportCertData

            # This header needs to be set if using a loadbalancer or proxy
            RequestHeader unset X-Forwarded-For
            RequestHeader set X-SSL-Subject %{SSL_CLIENT_S_DN}e
            RequestHeader set X-Client-DN %{SSL_CLIENT_S_DN}e
            RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e

            DocumentRoot /etc/puppet/rack/public/
            RackBaseURI /
            <Directory /etc/puppet/rack/>
                Options None
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    I have commented out SSLCertificateChainFile, SSLCACertificateFile and SSLCARevocationFile; this is not a CA server, so I'm not sure I need them. How would I get Passenger to work with these? I would like to use ProxyPassMatch, which I have configured as per the documentation. I don't want to specify a ca_server in every puppet.conf file. I am getting this error when trying to create a cert from a puppet client pointing to the second puppet master server (puppet-master2.test.net):

        [root@puppet-client2 ~]# puppet agent --test
        Error: Could not request certificate: Could not intern from s: nested asn1 error
        Exiting; failed to retrieve certificate and waitforcert is disabled

    On the puppet client I have this:

        [main]
        server = puppet-master2.test.net

    What have I missed?

    -- update

    Here is the new virtual host file on my secondary puppet master. Is this correct? I have SSL turned off.
        LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/apache2/mod_passenger.so
        PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-3.0.18
        PassengerRuby /usr/bin/ruby

        # you probably want to tune these settings
        PassengerHighPerformance on
        PassengerMaxPoolSize 12
        PassengerPoolIdleTime 1500
        # PassengerMaxRequests 1000
        PassengerStatThrottleRate 120
        RackAutoDetect Off
        RailsAutoDetect Off

        Listen 8140
        <VirtualHost *:8140>
            SSLEngine off

            ProxyPassMatch ^/([^/]+/certificate.*)$ https://puppet-master1.test.net:8140/$1

            # Obtain Authentication Information from Client Request Headers
            SetEnvIf X-Client-Verify "(.*)" SSL_CLIENT_VERIFY=$1
            SetEnvIf X-SSL-Client-DN "(.*)" SSL_CLIENT_S_DN=$1

            DocumentRoot /etc/puppet/rack/public/
            RackBaseURI /
            <Directory /etc/puppet/rack/>
                Options None
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    Cheers, Oli
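
    A quick way to test the proxy rule in isolation, assuming Puppet's REST API layout of that era (/{environment}/certificate/...): ask the secondary master for the CA certificate, which the ProxyPassMatch should hand off to puppet-master1:

        curl -k -H 'Accept: s' https://puppet-master2.test.net:8140/production/certificate/ca

    A "nested asn1 error" usually means the agent received something other than a PEM certificate back (an HTML error page, or a plain-HTTP response where TLS was expected), so inspecting the raw body of this request narrows down which side is misbehaving. Note that with SSLEngine off in the updated vhost, an agent connecting over HTTPS to 8140 can't complete a handshake at all; that vhost shape only works behind a TLS-terminating frontend.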

    Read the article

< Previous Page | 493 494 495 496 497 498 499 500 501 502 503 504  | Next Page >