Search Results

Search found 5718 results on 229 pages for 'resource'.


  • S11 launched

    - by unixman
    Now that Oracle Solaris 11 is out, it's time to do two things -- 1) see what's in it, what's new and why it's important, and then assess why it might make sense to begin evaluating it for your needs, and 2) acknowledge, give thanks to and congratulate all the R&D personnel, architects, engineers, designers and testers who've put in so much effort and energy into helping make Solaris 11 (and SunOS 5.11) what it has become -- starting way back circa 2004 and, more importantly, culminating in the recent years and months -- staying focused on the execution, unwavering in the face of various challenges.
    For #1 above, here are a few good things to get going with:
    - Watch the product launch replay
    - Visit the Solaris 11 Spotlight section on oracle.com
    - Get comfortable through introductory videos and detailed "how-to" guides (ex: how to create and publish IPS packages), white papers on the new default root file system, ZFS, and reap the benefits brought on by the fundamental shift in easing the administration experience
    - Look at the next level of software lifecycle management that is enabled by technologies such as Automated Installer and Image Packaging System -- which dramatically address patch management-related challenges
    - Understand how we continue to innovate in areas of service intelligence, reliability and availability
    - Start to evaluate enhancements in virtualization capabilities -- whether influenced by the need to consolidate or motivated by the need to have increased service mobility across physical systems, leveraging hardware-level abstractions
    - Gain more control over your network-centric services through enhancements in network resource management, observability and I/O performance
    - Look beyond your existing infrastructure with confidence that you can re-host and transition to newer systems with the use of Solaris 10 zones running on top of Solaris 11
    - Relish the fact that you can do all this, get your data secure and encrypted and more, on both SPARC and x86-based systems
    - Stay informed by keeping an eye on relevant blogs, which we've begun turning up recently
    - Go through a hands-on lab
    - Sign up to take a class, or just opt to watch various videos to begin to raise your comfort level with these technologies
    For #2 above -- there are many ways to do that. One way is to just say "thanks" with an email, a post, or a simple card, similar to this one seen at a Barnes and Noble store recently. The front of the card is followed by what's inside... and as the saying goes, now more than ever "it's what's inside that counts". And here's the inside of the card: So, what are you waiting for? Go download and try it out, and please let us know what you think of it!

    Read the article

  • The Minimalist's Approach to Content Governance

    - by Kellsey Ruppel
    This week on the blog, we want to focus on the content lifecycle and how important it is to have the tools in place to be able to properly manage all the phases of the content lifecycle. John Brunswick has some great advice when it comes to this topic, so expect to hear a lot from him this week! Originally posted by John Brunswick. Let's be honest - content governance is far from an exciting topic. BUT the potential of a very small intranet team creating and maintaining a platform that provides an organization with relevant, high value information, helping workers to get their jobs done with greater accuracy and in less time, is exciting. It is easy to quickly start producing content, but the challenge is ensuring that the environment is easy to navigate and use on the third week and during the third year. What can be done to bridge this gap? Over the next few blog entries let's take a pragmatic, minimalistic view of a process that can help any team manage a wealth of unstructured information. Based on an earlier article that I wrote around Portal Governance, I am going to focus on using technology as much as possible to support the governance of content with minimal involvement from users. The only certainty about content production is that business users are not fans of maintaining content. Maintenance is overhead and is a long-term investment whose value will possibly not be realized under the current content creator's watch. To add context to how we will use technical tools in this process, each post will highlight one section of the content lifecycle process as outlined below.
    Content Lifecycle Stages
    1. Request - Understand the education, purpose, resource and success criteria for content
    2. Create - Determine access and workflow for content
    3. Manage - Understand ownership and review cycles
    4. Retire - Act on thresholds established during the request stage
    Within each stage we will also elaborate as to:
    1. Why - why would we entertain doing this?
    2. How - the steps that are needed to make it happen
    3. Impact - what is the net benefit or loss based on the process
    Over the course of this week, we will dive deep into the stages and the minimal amount of time, effort and process within each to make some meaningful gains in the improvement of user experience and productivity in their search for information. It might be a stretch to say that we can make content governance exciting, but hopefully it can end up being painless and paying dividends. And if you'd like to hear first hand from a customer that is managing their content lifecycle with Oracle WebCenter, be sure to join us on Wednesday for this webcast "ResCare Solves Content Lifecycle Challenges with Oracle WebCenter"!

    Read the article

  • Cloud Infrastructure has a new standard

    - by macoracle
    I have been working for more than two years now in the DMTF working group tasked with creating a Cloud Management standard. That work has culminated in the release today of the Cloud Infrastructure Management Interface (CIMI) version 1.0 by the DMTF. CIMI is a single interface that a cloud consumer can use to manage their cloud infrastructure in multiple clouds. As CIMI is adopted by the cloud vendors, no more will you need to adapt client code to each of the proprietary interfaces from these multiple vendors. Unlike a de facto standard, where typically one vendor has change control over the interface and everyone else has to reverse engineer the inner workings of it, CIMI is a de jure standard that is under the change control of a standards body. One reason the standard took two years to create is that we factored in use cases, requirements and contributed APIs from multiple vendors. These vendors have products shipping today and as a result CIMI has a strong foundation in real world experience.
    What does CIMI allow? CIMI is both a model for the resources (computing, storage, networking) in the cloud as well as a RESTful protocol binding to HTTP. This means that to create a Machine (guest VM), for example, the client creates a “document” that represents the Machine resource and sends it to the server using HTTP. CIMI allows the resources to be encoded in either JavaScript Object Notation (JSON) or the eXtensible Markup Language (XML). CIMI provides a model for the resources that can be mapped to any existing cloud infrastructure offering on the market. There are some features in CIMI that may not be supported by every cloud, but CIMI also supports the discovery of which features are implemented. This means that you can still have a client that works across multiple clouds and is able to take full advantage of the features in each of them.
    Isn’t it too early for a standard? A key feature of a successful standard is that it allows for compatible extensions to occur within the core framework of the interface itself. CIMI’s feature discovery (through metadata) is used to convey to the client that additional features, which may be vendor specific, have been implemented. As multiple vendors implement such features, they become candidates to be added to future versions of CIMI. Thus innovation can continue in the cloud space without being slowed down by a lowest common denominator type of specification. Since CIMI was developed in the open by dozens of stakeholders who are already implementing infrastructure clouds, I expect CIMI to be adopted by these same companies and others over the next year or two. Cloud customers who can see the benefit of this standard should start to ask their cloud vendors to show a CIMI implementation in their roadmap.  For more information on CIMI and the DMTF's other cloud efforts, go to: http://dmtf.org/cloud
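
    As a rough illustration of the model-plus-REST idea (a sketch only: the host, credentials and attribute names below are simplified assumptions rather than text quoted from the CIMI 1.0 specification), creating a Machine amounts to POSTing a JSON document that describes it to the provider's machine collection:

        # Illustrative only: endpoint, credentials and attribute names are assumptions.
        curl -X POST https://cloud.example.com/cimi/machines \
             -H "Content-Type: application/json" \
             -u user:password \
             -d '{
                   "resourceURI": "http://schemas.dmtf.org/cimi/1/MachineCreate",
                   "name": "demo-machine",
                   "machineTemplate": {
                     "machineConfig": { "href": "https://cloud.example.com/cimi/machineConfigs/small" },
                     "machineImage":  { "href": "https://cloud.example.com/cimi/machineImages/linux" }
                   }
                 }'

    The same create could equally be expressed in XML, and feature discovery through the provider's metadata would tell the client which optional attributes are honored.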

    Read the article

  • Oracle Linux Partner Pavilion Spotlight - Part II

    - by Ted Davis
    As we draw closer to the first day of Oracle OpenWorld, starting in less than a week, we continue to showcase some of our premier partners exhibiting in the Oracle Linux Partner Pavilion (Booth #1033). We have Independent Hardware Vendors, Independent Software Vendors and Systems Integrators that show the breadth of support in the Oracle Linux and Oracle VM ecosystem. In today's post we highlight three additional Oracle Linux / Oracle VM partners from the pavilion.
    Micro Focus delivers mainframe solutions software and software delivery tools with its Borland products. These tools are grouped under the following solutions:
    - Analysis and testing tools for JDeveloper: Micro Focus Enterprise Analyzer is key to the success of application overhaul and modernization strategies by ensuring that they are based on a solid knowledge foundation. It reveals the reality of enterprise application portfolios and the detailed constructs of business applications.
    - COBOL for Oracle Database, Oracle Linux, and Tuxedo: Micro Focus Visual COBOL delivers the next generation of COBOL development and deployment. It brings the productivity of the Eclipse IDE to COBOL, and provides the ability to deploy key business critical COBOL applications to Oracle Linux both natively and under a JVM.
    - Migration and modernization tooling for mainframes: Enterprise application knowledge, development, test and workload re-hosting tools significantly improve the efficiency of business application delivery, enabling CIOs and IT leaders to modernize application portfolios and target platforms such as Oracle Linux.
    When it comes to Oracle Linux database environments, supporting high transaction rates with minimal response times is no longer just a goal. It’s a strategic imperative. The “data deluge” is impacting the ability of databases and other strategic applications to access data and provide real-time analytics and reporting. As such, customer demand for accelerated application performance is increasing. Visit LSI at the Oracle Linux Pavilion, #733, to find out how LSI Nytro Application Acceleration products are designed from the ground up for database acceleration. Our intelligent solid-state storage solutions help to eliminate I/O bottlenecks, increase throughput and enable Oracle customers to achieve the highest levels of DB performance.
    Accelerate Your Exadata Success With Teleran. Teleran’s software solutions for Oracle Exadata and Oracle Database reduce the cost, time and effort of migrating and consolidating applications on Exadata. In addition, Teleran delivers visibility and control solutions for BI/data warehouse performance and user management that ensure service levels and cost efficiency. Teleran will demonstrate these solutions at the Oracle OpenWorld Linux Pavilion:
    - Consolidation Accelerator - Reduces the cost, time and risk of migrating and consolidating applications on Exadata.
    - Application Readiness - Identifies legacy application performance enhancements needed to take advantage of Exadata performance features
    - Workload Accelerator - Identifies and clusters workloads for faster performance on Exadata
    - Application Visibility and Control - Improves performance, user productivity, and alignment to business objectives while reducing support and resource costs.
    Thanks for reading today's Partner Spotlight. Three more partners will be highlighted tomorrow. If you missed our first Partner Spotlight check it out here.

    Read the article

  • Wifi problems after upgrading to 13.10

    - by Simon
    I just upgraded to Ubuntu 13.10, but since the upgrade I don't have internet access via wifi anymore.
    I can: see networks, connect to a network, ping myself (localhost, 192.168.0.103).
    I can't: ping others (including other devices on the same wireless network, including the gateway/router), resolve hosts, or access any other external resource, whether on my own network or on the internet.
    Using Wireshark, I noticed my computer is continuously sending ARP requests like "Who has 192.168.0.1 [which is the gateway]? Tell 192.168.0.103". It doesn't get any replies though. When I ping another IP address for which it knows the MAC address (from cache), it turns out a packet loss of 90% occurs, and even if a packet manages to arrive it takes around 3000ms. The output of route -n is:
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref Use Iface
        0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0   0   eth1
        192.168.0.0     0.0.0.0         255.255.255.0   U     9      0   0   eth1
        192.168.122.0   0.0.0.0         255.255.255.0   U     0      0   0   virbr0
    Before upgrading, wifi worked fine. Using other devices, wifi still works fine. Resetting the router didn't help. Ethernet still works after upgrading. Any suggestions?
    Update: I'm using the wl driver. Here's the relevant output of some commands:
        lspci | grep Wireless
        03:00.0 Network controller: Broadcom Corporation BCM4313 802.11bgn Wireless Network Adapter (rev 01)
        cat /etc/modprobe.d/blacklist.conf
        [...]
        blacklist mac80211
        blacklist brcm80211
        blacklist cfg80211
        blacklist lib80211_crypt_tkip
        blacklist lib80211
        blacklist b43
        cat /etc/rc.local
        sudo modprobe -r lib80211
        sudo insmod /lib/modules/3.2.0-30-generic-pae/kernel/net/wireless/lib80211.ko
        sudo insmod /lib/modules/3.2.0-30-generic-pae/kernel/net/wireless/lib80211_crypt_wep.ko
        sudo insmod /lib/modules/3.2.0-30-generic-pae/kernel/net/wireless/lib80211_crypt_tkip.ko
        sudo insmod /lib/modules/3.2.0-30-generic-pae/kernel/net/wireless/lib80211_crypt_ccmp.ko
        sudo modprobe wl
        exit 0
    The last lines are probably how I got wireless working after the previous upgrade (wireless has been a problem after each upgrade).
    Update 2: added information about the exact hardware below. The hardware is an integrated device, so I ran lspci -nn | grep -i network. The output is:
        03:00.0 Network controller [0280]: Broadcom Corporation BCM4313 802.11bgn Wireless Network Adapter [14e4:4727] (rev 01)
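
    One hedged thing to check (a sketch, not a verified fix): the insmod lines in /etc/rc.local point at modules under /lib/modules/3.2.0-30-generic-pae, a kernel that 13.10 no longer runs, so they silently do nothing after the upgrade. Removing those lines and rebuilding the Broadcom STA (wl) driver against the current kernel is a common first step for the BCM4313:

        # Remove or comment out the old insmod/modprobe lines in /etc/rc.local first.
        sudo apt-get update
        sudo apt-get install --reinstall bcmwl-kernel-source   # rebuilds wl via DKMS
        # Unload competing drivers (errors for modules that are not loaded can be ignored),
        # then load wl against the running kernel.
        sudo modprobe -r b43 brcmsmac brcmfmac wl
        sudo modprobe wl

    If the symptom (ARP requests with no replies) persists with a freshly built wl module, the cause may lie elsewhere, for example power management or the router's 802.11n settings.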

    Read the article

  • Watch ON-Demand Oracle's 4th Annual Primavera Virtual Summit

    - by Melissa Centurio Lopes
    Did you miss Oracle's 4th Annual Primavera Virtual Summit? Or maybe you attended the virtual event live, but want to re-watch some presentations or download industry-specific assets - well you can! Watch and visit here, on-demand. This event is now available on-demand for your convenience. Feel free to log in anytime to listen again about:
    - New cloud-based PPM solutions presentations.
    - A special Primavera Roadmap presentation - what's new and what's coming.
    - Customer stories highlighting their successes with Primavera Unifier, Oracle Instantis and Primavera P6 Enterprise Project Portfolio Management.
    And don't forget to visit the Resource Library to check out all the latest product information, whitepapers, and demos, which are all available to download and read at your convenience. During your visit to the Virtual Summit On-Demand, we would appreciate you visiting the Networking Lounge and taking a few minutes to fill out the survey. We would really like to hear what you think. You can also visit the Primavera Website for more information about the newest EPPM solutions and offerings.

    Read the article

  • How can I cleanly and elegantly handle data and dependencies between classes

    - by Neophyte
    I'm working on a 2D top-down game in SFML 2, and need to find an elegant way in which everything will work and fit together. Allow me to explain. I have a number of classes that inherit from an abstract base that provides a draw method and an update method to all the classes. In the game loop, I call update and then draw on each class; I imagine this is a pretty common approach. I have classes for tiles, collisions, the player and a resource manager that contains all the tiles/images/textures. Due to the way input works in SFML I decided to have each class handle input (if required) in its update call. Up until now I have been passing in dependencies as needed; for example, in the player class when a movement key is pressed, I call a method on the collision class to check if the position the player wants to move to will be a collision, and only move the player if there is no collision. This works fine for the most part, but I believe it can be done better, I'm just not sure how. I now have more complex things I need to implement, e.g. a player is able to walk up to an object on the ground, press a key to pick it up/loot it and it will then show up in inventory. This means that a few things need to happen:
    1. Check if the player is in range of a lootable item on keypress, else do not proceed.
    2. Find the item.
    3. Update the sprite texture on the item from its default texture to a "looted" texture.
    4. Update the collision for the item: it might have changed shape or been removed completely.
    5. Inventory needs to be updated with the added item.
    How do I make everything communicate? With my current system I will end up with my classes going out of scope, and method calls to each other all over the place. I could tie up all the classes in one big manager and give each one a reference to the parent manager class, but this seems only slightly better. Any help/advice would be greatly appreciated! If anything is unclear, I'm happy to expand on things.

    Read the article

  • Help writing server script to ban IP's from a list

    - by Chev_603
    I have a VPS that I use as an openvpn and web server. For some reason, my apache log files are filled with thousands of these hack attempts: "POST /xmlrpc.php HTTP/1.0" 404 395. These attack attempts fill up 90% of my logs. I think it's a WordPress vulnerability they're looking for. Obviously they are not successful (I don't even have WordPress on my server), but it's annoying and probably resource consuming as well. I am trying to write a bash script that will do the following:
    1. Search the apache logs and grab the offending IPs (even if they try it once),
    2. Sort them into a list with each unique IP on a separate line,
    3. And then block them using iptables rules.
    I am a bash newb, and so far my script does everything except step 3. I can manually block the IPs, but that's tedious and besides, this is Linux and it's perfectly capable of doing it for me. I also want the script to be customizable so that I (or anyone else who wants to use it) can change the variables to suit whatever situation I/they may deal with in the future. Here is the script so far:

        #!/bin/bash
        ##IP LIST GENERATOR
        ##Author Chev Young
        ##Script to search Apache logs and list IP's based on custom filters
        ##
        ##Define our variables:
        DIRECT=~/Script ##Location of script&where to put results/temp files
        LOGFILE=/var/log/apache2/access.log ## Logfile to search for offenders
        TEMPLIST=xml_temp ## Temporary file name
        IP_LIST=ipstoban ## Name of results file
        FILTER1=xmlrpc ## What are we looking for? (Requests we want to ban)

        cd $DIRECT
        if [ ! -f $TEMPLIST ];then
            touch $TEMPLIST ##Create temp file
        fi

        cat $LOGFILE | grep $FILTER1 >> $DIRECT/$TEMPLIST
        ## Only interested in the IP's, so:
        sed -e 's/\([0-9]\+\.[0-9]\+\.[0-9]\+\.[0-9]\+\).*$/\1/' -e t -e d $DIRECT/$TEMPLIST | sort | uniq > $DIRECT/$IP_LIST
        rm $TEMPLIST ## Clean temp file
        echo "Done. Results located at $DIRECT/$IP_LIST"

    So I need help with the next part of the script, which should ban the IPs (incoming and perhaps outgoing too) from the resulting $IP_LIST file. I don't care if it utilizes UFW or iptables directly, as long as it bans the IPs. I'd probably run it as a cron task. What I'm having trouble with is understanding how to use each line of the result file as a separate variable to do something like: ufw deny $IP1 $IP2 $IP3, etc. Any ideas? Thanks.
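
    One minimal way to finish step 3 (a sketch, not tested on this particular system; the file path is assumed to match the $DIRECT/$IP_LIST produced above) is to loop over the results file with a while read and add one deny rule per address:

        #!/bin/bash
        ## Hypothetical companion script: ban every address listed in the results file.
        IP_LIST=~/Script/ipstoban

        while read -r ip; do
            [ -z "$ip" ] && continue             ## skip blank lines
            ## UFW variant: insert at the top so the deny wins over existing allows
            ufw insert 1 deny from "$ip" to any
            ## iptables alternative (comment the ufw line and uncomment this instead):
            ## iptables -I INPUT -s "$ip" -j DROP
        done < "$IP_LIST"

    Run it as root from cron after the list generator. Re-running it will re-add rules, so in practice you would also want to check whether a rule for the address already exists before inserting it.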

    Read the article

  • Organization & Architecture UNISA Studies – Chap 13

    - by MarkPearl
    Learning Outcomes
    - Explain the advantages of using a large number of registers
    - Discuss the way in which compilers optimize register usage
    - Discuss the evolution of CISC machines
    - Describe the characteristics of RISC architecture
    - Discuss the RISC vs. CISC controversy
    - Describe the way in which RISC and CISC design principles can be combined
    Instruction Execution Characteristics
    To understand the line of reasoning of RISC advocates, we need a brief overview of instruction execution characteristics. These include operations, operands, and procedure calls. These three sections can be studied in depth in the textbook at pages 503 - 505. A number of groups have come to the conclusion that the attempt to make the instruction set architecture closer to HLLs (High Level Languages) is not the most effective design strategy. Rather, HLLs can best be supported by optimizing the performance of the most time-consuming features of typical HLL programs. Generally, 3 main characteristics came up to improve performance:
    - Use a large number of registers or use a compiler to optimize register usage
    - Careful attention needs to be paid to the design of instruction pipelines
    - A simplified (reduced) instruction set is indicated
    The use of a large register file
    One of the most important design principles of RISC machines is the use of a large number of registers. The concept of register windows and the use of a large register file versus the use of cache memory are discussed. On the face of it, the use of a large set of registers should decrease the need to access memory. The design task is to organize the registers in such a fashion that this goal is realized. Read page 507 – 510 for a detailed explanation.
    Compiler-based register optimization
    Reduced Instruction Set Architecture
    There are two advantages to smaller programs:
    - Because the program takes up less memory, there is a savings in that resource (this was more compelling when memory was more expensive)
    - Smaller programs should improve performance, and this will happen in two ways: fewer instructions means fewer instruction bytes to be fetched, and in a paging environment smaller programs occupy fewer pages, reducing page faults.
    Certain characteristics are common to RISC processors:
    - One instruction per cycle
    - Register-to-register operations
    - Simple addressing modes
    - Simple instruction formats
    RISC vs. CISC
    After initial enthusiasm for RISC machines, there has been a growing realization that RISC designs may benefit from the inclusion of some CISC features, and CISC designs may benefit from the inclusion of some RISC features.

    Read the article

  • Let’s keep informed with “Data Explorer”

    - by Luca Zavarella
    At PASS Summit 2011 a new project was announced. It’s a Microsoft SQL Azure Lab and its codename is Microsoft “Data Explorer”. According to the official blog (http://blogs.msdn.com/b/dataexplorer/), this new tool provides an innovative way to acquire new knowledge from the data that interest you. In a nutshell, Data Explorer allows you to combine data from multiple sources, and to publish and share the result. In addition, you can generate data streams in the RESTful open format (Open Data Protocol), and they can then be used by other applications. Nonetheless we can still use Excel or PowerPivot to analyze the results. Sources can be varied: Excel spreadsheets, text files, databases, Windows Azure Marketplace, etc. For those who are not familiar with this resource, I strongly suggest you keep an eye on the data services available in the Marketplace: https://datamarket.azure.com/browse/Data To tell the truth, as I read the above blog post, I was tempted to think of Data Explorer as a "SSIS on Azure" addressed to the Power User. In fact, reading the response from Tim Mallalieu (Group Program Manager of Data Explorer) to the comment made on his post, I had a positive confirmation of my first impression: “…we originally thinking of ourselves as Self-Service ETL. As we talked to more folks and started partnering with other teams we realized that would be an area that we can add value but that there were more opportunities emerging.” The typical operations of the ETL phase (processing and organization of data in different formats) can be obtained thanks to Data Explorer Mashup. This is an image of the tool: The flexibility in the manipulation of information is given by the Data Explorer Formula Language. This is a formula-based, Excel-style, specific language. Anyone wishing to know more can check the project page in addition to the aforementioned blog: http://www.microsoft.com/en-us/sqlazurelabs/labs/dataexplorer.aspx In light of this new project, there is no doubt about the intention of Microsoft to get closer and closer to the Power User, providing flexible and very easy to use tools for data analysis. The prime example of this is PowerPivot. The question that remains is always the same: having more Power Users in a company will implicitly mean having different data models representing the same reality. But this would inevitably lead to anarchical data management... What do you think about that?

    Read the article

  • database independent coding framework options?

    - by statirasystems
    Background: I have not programmed in a while besides doing VBA and a little VB.NET. So please forgive my language use. I'm green and have a head cold. I am reading all I can now, but I have no programming circles to draw from. The information I am providing is to help guide you to what I am looking for. I am not confident I can ask the question properly. Story: I have four different projects that I am starting. Obviously I won't be working on all at the same time; however, they each will have similar needs and be interrelated. They are as follows:
    1. Desktop Environment/System User Interface - basically a product that runs on major computers via Mono or .NET that unifies the look and functions. In the context of the upcoming question, it would be able to directly access data of various types. It would work in tandem with my office suite, system manager, and network application framework.
    2. Office Suite - technically it would not be a suite, since I will be doing it from one interface, except for the Communications Application. As far as the question goes, it will need to be able to link to various data sources for storing files and using, manipulating, and presenting information.
    3. System Manager - an intelligent system to manage and administer the entire network and all equipment. As far as the question goes, it needs to be able to access data for archiving and for accessing its own settings stored in various formats, SQL or XML.
    4. Network Application Framework - a complete system that can be used for ERP, CRM, CMS, errata, file management, and so on. As to the question, it needs to be able to access its own data or interlink with existing applications.
    Requirement: C#; simplifies and reduces coding; use the same code to access different databases (i.e. MySQL, MS SQL, Access, XML, ...); Mono would be nice but not a must.
    Question: What libraries, frameworks, or other options would be able to help with this? Is there a good resource to guide me? I don't want arguing over what is best, just information to help me further understand and make an educated decision.

    Read the article

  • ??????????

    - by Steve He(???)
    [The body of this Chinese-language post was lost to a text-encoding error, so only fragments are recoverable. It is a set of My Oracle Support usage tips by Steve He covering: switching PowerView on and off and building PowerView filters (New, then Select Filter Type, then Create), a short introductory PowerView video, and using the Dashboard, Information Centers and Knowledge Base search to find product information. Related resources cited: My Oracle Support - User Resource Center [ID 873313.1] and the My Oracle Support Community.]

    Read the article

  • HAProxy reqrep remove URI on backend request

    - by Jim
    Real quick question regarding HAProxy reqrep. I am trying to rewrite/replace the request that gets sent to the backend. I have the following example domain and URIs: http://domain/web1 and http://domain/web2. I want web1 to go to backend webfarm1, and web2 to go to webfarm2. Currently this does happen. However, I want to strip off the web1 or web2 URI when the request is sent to the backend. Here is my haproxy.cfg:

        frontend webVIP_80
            mode http
            bind :80
            #acl routing to backend
            acl web1_path path_beg /web1
            acl web2_path path_beg /web2
            #which backend
            use_backend webfarm1 if web1_path
            use_backend webfarm2 if web2_path
            default_backend webfarm1

        backend webfarm1
            mode http
            reqrep ^([^\ ]*)\ /web1/(.*) \1\ /\2
            balance roundrobin
            option httpchk HEAD /index HTTP/1.1\r\nHost:\ example.com
            server webtest1 10.0.0.10:80 weight 5 check slowstart 5000ms
            server webtest2 10.0.0.20:80 weight 5 check slowstart 5000ms

        backend webfarm2
            mode http
            reqrep ^([^\ ]*)\ /web2/(.*) \1\ /\2
            balance roundrobin
            option httpchk HEAD /index HTTP/1.1\r\nHost:\ example.com
            server webtest1-farm2 10.0.0.110:80 weight 5 check slowstart 5000ms
            server webtest2-farm2 10.0.0.120:80 weight 5 check slowstart 5000ms

    If I go to http://domain/web1 or http://domain/web2, I see in the error logs on a server in each backend that the request is for the resource /web1 or /web2 respectively. Therefore I believe there to be something wrong with my regular expression, even though I copied and pasted it from the documentation: http://code.google.com/p/haproxy-docs/wiki/reqrep Summary: I'm trying to route traffic based on URI, however I want to strip the URI on the backend side. Go to http://domain/web1 -- backend request of / to webfarm1. Thank you! -Jim
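
    One detail worth noting (a hedged observation, not a tested fix): the reqrep pattern only matches when a slash follows /web1, so a request for exactly "/web1" with no trailing slash passes through to the backend unchanged, which matches the symptom described. A variant that makes the trailing slash optional would look like this sketch:

        backend webfarm1
            mode http
            # hypothetical adjustment: both /web1 and /web1/... are rewritten
            reqrep ^([^\ ]*)\ /web1/?(.*) \1\ /\2

    The same change would apply to the /web2 rule in webfarm2.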

    Read the article

  • SRMSVC event ID 8197

    - by Godeke
    I have a Windows 2003 R2 machine that is giving an Event ID 8197 about once every hour and ten minutes. The full error is attached below. The machine is primarily used to host IIS webpages and SMTP. There are no known scheduled tasks on the machine. I have read a lot of Google search results and Microsoft docs, but none of the suggestions found there have any impact. What I am curious about is whether there is any way to convert the SRMVOLMC81 and SRMVOLMC57 into mount point data, so I could at least know where the error is sourcing from (there are no related errors in the logs, just the 8197 every hour and ten). Event Type: Error Event Source: SRMSVC Event Category: None Event ID: 8197 Date: 2/7/2011 Time: 11:32:21 AM User: N/A Computer: SERVER001 Description: File Server Resource Manager Service error: Unexpected error. Error-specific details: Error: GetVolumeNameForVolumeMountPoint, 0x80070001, Incorrect function. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. Data: 0000: 53 52 4d 56 4f 4c 4d 43 SRMVOLMC 0008: 38 31 00 00 00 00 00 00 81...... 0010: 53 52 4d 56 4f 4c 4d 43 SRMVOLMC 0018: 35 37 00 00 00 00 00 00 57......

    Read the article

  • How to use tscon on Windows 7?

    - by Radek
    I need to run overnight automation testing using RFT and IE on Windows7 virtual machine. I found that restarting the Windows box before the testing starts helps. I am moving the production environment from Windows XP to Windows 7. RFT used to complain when running RFT scripts that CRFCN0557E: Activation failed when running under a Terminal Services environment. This may be caused by using a minimized terminal window - try playing back without minimizing the terminal window (it does not need to be full-screen). Running tscon.exe 0 /dest:console prior starting any RFT script fix the error on Windows XP. But not on Windows7. I did some research and was trying for hours to fix that but nothing helped. There is no screen saver turned on on Windows7. I tried to run both but nothing helped. tscon.exe 0 /dest:console tscon.exe 1 /dest:console On Windows7 tscon returns {ErrorPrintf(): LoadString failed, Error 15105, (0x00003B01)} Error [15105]:The resource loader cache doesn't have loaded MUI entry. Error [0]:The operation completed successfully. On Windows XP tscon returns Could not connect sessionID 0 to sessionname console, Error code 7045 Error [7045]:The requested session access is denied. I just double checked that running tscon.exe 0 /dest:console on Windows XP solves the issue. Cannot understand the output of the tscon command then. Any idea how I can run RFT scripts after I restart the Windows box automatically? Preferably without involving any other computer. I was even thinking to use the old Windows XP to make remote desktop session to make RFT happy. I hope there is other better solution to that.

    Read the article

  • SQL Server Express service is not starting

    - by Mahdi Ghiasi
    I bought my first VPS yesterday, and I have installed Microsoft SQL Server 2012 Express on it. Then I restarted my VPS, but the SQL Server service didn't start. I've tried to start it manually, but it won't start: What is the problem? How do I solve it? P.S: This is my first server management experience, and I'm a newbie; if you need any further details about this, please leave a comment and I'll update the question. Update 1: These are some log details from Event Viewer that I thought may be useful for this problem: FCB::Open failed: Could not open file e:\sql11_main_t.obj.x86release\sql\mkmastr\databases\objfre\i386\MSDBData.mdf for file number 1. OS error: 3(The system cannot find the path specified.). The resource database build version is 11.00.3000. This is an informational message only. No user action is required. FileMgr::StartLogFiles: Operating system error 2(The system cannot find the file specified.) occurred while creating or opening file 'e:\sql11_main_t.obj.x86release\sql\mkmastr\databases\objfre\i386\MSDBLog.ldf'. Diagnose and correct the operating system error, and retry the operation. Starting up database 'model'. FCB::Open failed: Could not open file e:\sql11_main_t.obj.x86release\sql\mkmastr\databases\objfre\i386\model.mdf for file number 1. OS error: 3(The system cannot find the path specified.). FileMgr::StartLogFiles: Operating system error 2(The system cannot find the file specified.) occurred while creating or opening file 'e:\sql11_main_t.obj.x86release\sql\mkmastr\databases\objfre\i386\modellog.ldf'. Diagnose and correct the operating system error, and retry the operation. I'm confused about these e:\ paths; my VPS has just one C:\ drive, so what is e:\?

    Read the article

  • Request Entity Too Large error while uploading files of more than 128KB over SSL

    - by tushar
    We have a web portal built on the Java Spring framework, running on a Tomcat app server. The portal is served through an Apache web server connected to Tomcat through the JK connector. The entire portal is HTTPS-enabled using port 443 of Apache. The Apache version is Apache/2.4.2 (Unix), the latest stable version of the Apache web server. Whenever we try to upload files of more than 128 KB into the portal, we are facing a 413 error: Request Entity Too Large - The requested resource /teamleadchoachingtracking/doFileUpload does not allow request data with POST requests, or the amount of data provided in the request exceeds the capacity limit. In the Apache error log we get the following errors: AH02018: request body exceeds maximum size (131072) for SSL buffer AH02257: could not buffer message body to allow SSL renegotiation to proceed We did search Google and there were suggestions to set SSLRenegBufferSize to some high value like 10 MB. Based on these suggestions, we put the following entry in the VirtualHost section of the httpd config file: SSLRenegBufferSize 10486000 But still the error persists. Also, we have specified SSLVerifyClient none, but still renegotiation is happening. This is a very inconsistent and frustrating error. Any help will be highly appreciated. Many thanks in advance.
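
    A hedged observation rather than a confirmed fix: the 131072-byte limit in AH02018 is the default SSL renegotiation buffer, and renegotiation is normally triggered by SSL directives that change per directory or per location (SSLVerifyClient, SSLCipherSuite and the like). In that case SSLRenegBufferSize usually has to be raised in that same per-directory context, not only at the VirtualHost level. A sketch, with the path taken from the failing upload URL:

        <Location /teamleadchoachingtracking>
            SSLRenegBufferSize 10486000
        </Location>

    If no per-directory SSL directive exists at all, the next thing to check is whether another configuration file overrides the VirtualHost setting.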

    Read the article

  • HYPER-V R2 cannot mount ISO from network location (UNC path)

    - by Entity_Razer
    So, as the name suggests, I'm trying to mount an ISO from a network share using the UNC path on a HYPER-V R2 cluster. This is a pure demo / test case setup with: 2x HYPER-V R2, 1x NAS/iSCSI CSV. Cluster management is happening through the MMC with RSAT tools. So what I've done so far is: set up the cluster and configure quorum, add CSV shares and disks, set up 1 virtual machine on the Hyper-1 node. What I'm trying to do is: you go to settings --- DVD Drive --- use network location --- pick ISO file and press "apply". The error I'm getting is "User account does not have rights to mount iso". I changed that, or at least stopped getting that message, when I went to the HYPER-V node settings and tabbed on: "Use Default Credentials Automatically". Now I stopped getting the "user does not have rights..." message, but I get the following: Error applying DVD Drive Changes - Failed to remove device Microsoft Synthetic DVD Drive: "the specified network resource or device is no longer available". I've googled the problem but am unable to find a solution. Anyone here able to help me out? Much obliged!

    Read the article

  • Fetching e-mails into Redmine via IMAP

    - by Danilo Bargen
    I'm trying to fetch e-mails into Redmine via IMAP. The e-mails I'm generating look like this: FooBar Ltd 123456 http://example.com/Foobar-Ltd-123456.html Project: backend Tracker: Dataerror Beschreibung: This is the description =========================== CLIENT_IP: 192.168.1.215 HTTP_USER_AGENT: mozilla/asdfjköl I try to fetch them into Redmine via this command: rake -f /var/www/projects/redmine/Rakefile redmine:email:receive_imap \ RAILS_ENV="production" host=example.com port=993 ssl=true username=redmine \ password=1234 project=myproject tracker=other \ allow_override=project,tracker,category,priority \ move_on_success=read move_on_failure=failed But the e-mails get moved into the failed folder. I had this setup running some time ago with a different e-mail generator but pretty much the same template, and I can't figure out why it's not working. The permissions seem to be OK too. In order to further debug this issue, I need some logfiles. Are there any logfiles written by this command? Or are there any other suggestions to solve this issue? My environment: danilo@jabba:/var/www/projects/redmine$ RAILS_ENV=production script/about About your application's environment Ruby version 1.8.7 (i486-linux) RubyGems version 1.3.5 Rack version 1.0 Rails version 2.3.5 Active Record version 2.3.5 Active Resource version 2.3.5 Action Mailer version 2.3.5 Active Support version 2.3.5 Application root /var/www/projects/redmine Environment production Database adapter mysql Database schema version 20100819172912
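
    A small debugging sketch (paths assume a standard Redmine layout; nothing below is specific to this particular setup): the rake task logs through the Rails logger, so the reasons MailHandler gives for rejecting a message normally end up in log/production.log, and rake's own --trace flag shows where the task itself fails.

        # Watch the Rails log while re-running the fetch:
        cd /var/www/projects/redmine
        tail -f log/production.log &

        rake --trace -f Rakefile redmine:email:receive_imap \
          RAILS_ENV="production" host=example.com port=993 ssl=true \
          username=redmine password=1234 project=myproject tracker=other \
          allow_override=project,tracker,category,priority \
          move_on_success=read move_on_failure=failed

    A frequent cause of mails landing in the failed folder is a keyword line (Project, Tracker, and so on) not matching an existing project identifier or tracker name exactly, which the production.log entry should reveal.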

    Read the article

  • Internet Explorer 8 crashes on Citrix, Windows 2003

    - by Workshop Alex
    Difficult to decide where this Q fits best. It's server-related and programming-related, but it's a user problem so I'll put it here first... I work on a (Delphi) application that uses an Internet Explorer component to show information to the user. It's not a web application, just a desktop application which creates HTML pages to display them within a browser component. Some of the information on these webpages is retrieved from a web server, while other information is provided "live" by the application itself. It works quite well, but it adds an IE process (child process) next to my application. And this IE process seems to eat a lot of system resources. For normal users, this is not a real problem, so it's not an issue that I want to fix in the code. But one customer of this application uses it with about 100 users in a Citrix/Windows 2003 environment and they complain about problems with the application. IE8 tends to crash, not show, hang or cause other mayhem. Then again, I've warned them that, officially, I won't support any Citrix environment. But I'm willing to help them find a solution to fix this, and if need be I could make minor changes to my code to help fix this issue. (If possible.) But basically, I need a solution that any user/administrator on this Citrix environment can follow/use. Any ideas on how to resolve this resource problem?

    Read the article

  • How to tune down the Hyperic built-in postgresql database for a small setup

    - by Svish
    We are testing out Hyperic 4.5.1 in a quite small environment for now. Currently there are just 1-5 agents and there probably won't be any more than 10-15. When I run ps ax there are 20(!) postgres processes running. For a small setup like this, that can't be necessary, can it? I'm a software developer and don't have much experience with setting up servers and such though, so don't really know. Either way, what settings are appropriate for a small Hyperic setup like this? Current, default and untouched configuration file, hqdb/data/postgresql.conf: # ----------------------------- # PostgreSQL configuration file # ----------------------------- # # This file consists of lines of the form: # # name = value # # (The '=' is optional.) White space may be used. Comments are introduced # with '#' anywhere on a line. The complete list of option names and # allowed values can be found in the PostgreSQL documentation. The # commented-out settings shown in this file represent the default values. # # Please note that re-commenting a setting is NOT sufficient to revert it # to the default value, unless you restart the server. # # Any option can also be given as a command line switch to the server, # e.g., 'postgres -c log_connections=on'. Some options can be changed at # run-time with the 'SET' SQL command. # # This file is read on server startup and when the server receives a # SIGHUP. If you edit the file on a running system, you have to SIGHUP the # server for the changes to take effect, or use "pg_ctl reload". Some # settings, which are marked below, require a server shutdown and restart # to take effect. # # Memory units: kB = kilobytes MB = megabytes GB = gigabytes # Time units: ms = milliseconds s = seconds min = minutes h = hours d = days #--------------------------------------------------------------------------- # FILE LOCATIONS #--------------------------------------------------------------------------- # The default values of these variables are driven from the -D command line # switch or PGDATA environment variable, represented here as ConfigDir. #data_directory = 'ConfigDir' # use data in another directory # (change requires restart) #hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file # (change requires restart) #ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file # (change requires restart) # If external_pid_file is not explicitly set, no extra PID file is written. #external_pid_file = '(none)' # write an extra PID file # (change requires restart) #--------------------------------------------------------------------------- # CONNECTIONS AND AUTHENTICATION #--------------------------------------------------------------------------- # - Connection Settings - #listen_addresses = 'localhost' # what IP address(es) to listen on; # comma-separated list of addresses; # defaults to 'localhost', '*' = all # (change requires restart) port = 9432 # (change requires restart) max_connections = 100 # (change requires restart) # Note: increasing max_connections costs ~400 bytes of shared memory per # connection slot, plus lock space (see max_locks_per_transaction). You # might also need to raise shared_buffers to support more connections. 
#superuser_reserved_connections = 3 # (change requires restart) #unix_socket_directory = '' # (change requires restart) #unix_socket_group = '' # (change requires restart) #unix_socket_permissions = 0777 # octal # (change requires restart) #bonjour_name = '' # defaults to the computer name # (change requires restart) # - Security & Authentication - #authentication_timeout = 1min # 1s-600s #ssl = off # (change requires restart) #password_encryption = on #db_user_namespace = off # Kerberos #krb_server_keyfile = '' # (change requires restart) #krb_srvname = 'postgres' # (change requires restart) #krb_server_hostname = '' # empty string matches any keytab entry # (change requires restart) #krb_caseins_users = off # (change requires restart) # - TCP Keepalives - # see 'man 7 tcp' for details #tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds; # 0 selects the system default #tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds; # 0 selects the system default #tcp_keepalives_count = 0 # TCP_KEEPCNT; # 0 selects the system default #--------------------------------------------------------------------------- # RESOURCE USAGE (except WAL) #--------------------------------------------------------------------------- # - Memory - shared_buffers = 64MB # min 128kB or max_connections*16kB # (change requires restart) #temp_buffers = 8MB # min 800kB #max_prepared_transactions = 5 # can be 0 or more # (change requires restart) # Note: increasing max_prepared_transactions costs ~600 bytes of shared memory # per transaction slot, plus lock space (see max_locks_per_transaction). work_mem = 2MB # min 64kB maintenance_work_mem = 32MB # min 1MB #max_stack_depth = 2MB # min 100kB # - Free Space Map - max_fsm_pages = 204800 # min max_fsm_relations*16, 6 bytes each # (change requires restart) #max_fsm_relations = 1000 # min 100, ~70 bytes each # (change requires restart) # - Kernel Resource Usage - #max_files_per_process = 1000 # min 25 # (change requires restart) #shared_preload_libraries = '' # (change requires restart) # - Cost-Based Vacuum Delay - #vacuum_cost_delay = 0 # 0-1000 milliseconds #vacuum_cost_page_hit = 1 # 0-10000 credits #vacuum_cost_page_miss = 10 # 0-10000 credits #vacuum_cost_page_dirty = 20 # 0-10000 credits #vacuum_cost_limit = 200 # 0-10000 credits # - Background writer - #bgwriter_delay = 200ms # 10-10000ms between rounds #bgwriter_lru_percent = 1.0 # 0-100% of LRU buffers scanned/round #bgwriter_lru_maxpages = 5 # 0-1000 buffers max written/round #bgwriter_all_percent = 0.333 # 0-100% of all buffers scanned/round #bgwriter_all_maxpages = 5 # 0-1000 buffers max written/round #--------------------------------------------------------------------------- # WRITE AHEAD LOG #--------------------------------------------------------------------------- # - Settings - fsync = on # turns forced synchronization on or off #wal_sync_method = fsync # the default is the first option # supported by the operating system: # open_datasync # fdatasync # fsync # fsync_writethrough # open_sync #full_page_writes = on # recover from partial page writes #wal_buffers = 64kB # min 32kB # (change requires restart) commit_delay = 100000 # range 0-100000, in microseconds #commit_siblings = 5 # range 1-1000 # - Checkpoints - checkpoint_segments = 10 # in logfile segments, min 1, 16MB each #checkpoint_timeout = 5min # range 30s-1h #checkpoint_warning = 30s # 0 is off # - Archiving - #archive_command = '' # command to use to archive a logfile segment #archive_timeout = 0 # force a logfile segment switch after this # many 
seconds; 0 is off #--------------------------------------------------------------------------- # QUERY TUNING #--------------------------------------------------------------------------- # - Planner Method Configuration - #enable_bitmapscan = on #enable_hashagg = on #enable_hashjoin = on #enable_indexscan = on #enable_mergejoin = on #enable_nestloop = on #enable_seqscan = on #enable_sort = on #enable_tidscan = on # - Planner Cost Constants - #seq_page_cost = 1.0 # measured on an arbitrary scale #random_page_cost = 4.0 # same scale as above #cpu_tuple_cost = 0.01 # same scale as above #cpu_index_tuple_cost = 0.005 # same scale as above #cpu_operator_cost = 0.0025 # same scale as above #effective_cache_size = 128MB # - Genetic Query Optimizer - #geqo = on #geqo_threshold = 12 #geqo_effort = 5 # range 1-10 #geqo_pool_size = 0 # selects default based on effort #geqo_generations = 0 # selects default based on effort #geqo_selection_bias = 2.0 # range 1.5-2.0 # - Other Planner Options - #default_statistics_target = 10 # range 1-1000 #constraint_exclusion = off #from_collapse_limit = 8 #join_collapse_limit = 8 # 1 disables collapsing of explicit # JOINs #--------------------------------------------------------------------------- # ERROR REPORTING AND LOGGING #--------------------------------------------------------------------------- # - Where to Log - log_destination = 'stderr' # Valid values are combinations of # stderr, syslog and eventlog, # depending on platform. # This is used when logging to stderr: redirect_stderr = on # Enable capturing of stderr into log # files # (change requires restart) # These are only used if redirect_stderr is on: log_directory = '../../logs' # Directory where log files are written # Can be absolute or relative to PGDATA log_filename = 'hqdb-%Y-%m-%d.log' # Log file name pattern. # Can include strftime() escapes #log_truncate_on_rotation = off # If on, any existing log file of the same # name as the new log file will be # truncated rather than appended to. But # such truncation only occurs on # time-driven rotation, not on restarts # or size-driven rotation. Default is # off, meaning append to existing files # in all cases. log_rotation_age = 1d # Automatic rotation of logfiles will # happen after that time. 0 to # disable. #log_rotation_size = 10MB # Automatic rotation of logfiles will # happen after that much log # output. 0 to disable. # These are relevant when logging to syslog: #syslog_facility = 'LOCAL0' #syslog_ident = 'postgres' # - When to Log - #client_min_messages = notice # Values, in order of decreasing detail: # debug5 # debug4 # debug3 # debug2 # debug1 # log # notice # warning # error #log_min_messages = notice # Values, in order of decreasing detail: # debug5 # debug4 # debug3 # debug2 # debug1 # info # notice # warning # error # log # fatal # panic #log_error_verbosity = default # terse, default, or verbose messages #log_min_error_statement = error # Values in order of increasing severity: # debug5 # debug4 # debug3 # debug2 # debug1 # info # notice # warning # error # fatal # panic (effectively off) log_min_duration_statement = 10000 # -1 is disabled, 0 logs all statements # and their durations. 
#silent_mode = off # DO NOT USE without syslog or # redirect_stderr # (change requires restart) # - What to Log - #debug_print_parse = off #debug_print_rewritten = off #debug_print_plan = off #debug_pretty_print = off #log_connections = off #log_disconnections = off #log_duration = off #log_line_prefix = '' # Special values: # %u = user name # %d = database name # %r = remote host and port # %h = remote host # %p = PID # %t = timestamp (no milliseconds) # %m = timestamp with milliseconds # %i = command tag # %c = session id # %l = session line number # %s = session start timestamp # %x = transaction id # %q = stop here in non-session # processes # %% = '%' # e.g. '<%u%%%d> ' #log_statement = 'none' # none, ddl, mod, all #log_hostname = off #--------------------------------------------------------------------------- # RUNTIME STATISTICS #--------------------------------------------------------------------------- # - Query/Index Statistics Collector - #stats_command_string = on #update_process_title = on stats_start_collector = on # needed for block or row stats # (change requires restart) stats_block_level = on stats_row_level = on stats_reset_on_server_start = off # (change requires restart) # - Statistics Monitoring - #log_parser_stats = off #log_planner_stats = off #log_executor_stats = off #log_statement_stats = off #--------------------------------------------------------------------------- # AUTOVACUUM PARAMETERS #--------------------------------------------------------------------------- #autovacuum = off # enable autovacuum subprocess? # 'on' requires stats_start_collector # and stats_row_level to also be on #autovacuum_naptime = 1min # time between autovacuum runs #autovacuum_vacuum_threshold = 500 # min # of tuple updates before # vacuum #autovacuum_analyze_threshold = 250 # min # of tuple updates before # analyze #autovacuum_vacuum_scale_factor = 0.2 # fraction of rel size before # vacuum #autovacuum_analyze_scale_factor = 0.1 # fraction of rel size before # analyze #autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum # (change requires restart) #autovacuum_vacuum_cost_delay = -1 # default vacuum cost delay for # autovacuum, -1 means use # vacuum_cost_delay #autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for # autovacuum, -1 means use # vacuum_cost_limit #--------------------------------------------------------------------------- # CLIENT CONNECTION DEFAULTS #--------------------------------------------------------------------------- # - Statement Behavior - #search_path = '"$user",public' # schema names #default_tablespace = '' # a tablespace name, '' uses # the default #check_function_bodies = on #default_transaction_isolation = 'read committed' #default_transaction_read_only = off #statement_timeout = 0 # 0 is disabled #vacuum_freeze_min_age = 100000000 # - Locale and Formatting - datestyle = 'iso, mdy' #timezone = unknown # actually, defaults to TZ # environment setting #timezone_abbreviations = 'Default' # select the set of available timezone # abbreviations. Currently, there are # Default # Australia # India # However you can also create your own # file in share/timezonesets/. 
#extra_float_digits = 0                 # min -15, max 2
#client_encoding = sql_ascii            # actually, defaults to database
                                        # encoding

# These settings are initialized by initdb -- they might be changed
lc_messages = 'C'                       # locale for system error message
                                        # strings
lc_monetary = 'C'                       # locale for monetary formatting
lc_numeric = 'C'                        # locale for number formatting
lc_time = 'C'                           # locale for time formatting

# - Other Defaults -
#explain_pretty_print = on
#dynamic_library_path = '$libdir'
#local_preload_libraries = ''

#---------------------------------------------------------------------------
# LOCK MANAGEMENT
#---------------------------------------------------------------------------
#deadlock_timeout = 1s
#max_locks_per_transaction = 64         # min 10
                                        # (change requires restart)
# Note: each lock table slot uses ~270 bytes of shared memory, and there are
# max_locks_per_transaction * (max_connections + max_prepared_transactions)
# lock table slots.

#---------------------------------------------------------------------------
# VERSION/PLATFORM COMPATIBILITY
#---------------------------------------------------------------------------
# - Previous Postgres Versions -
#add_missing_from = off
#array_nulls = on
#backslash_quote = safe_encoding        # on, off, or safe_encoding
#default_with_oids = off
#escape_string_warning = on
#standard_conforming_strings = off
#regex_flavor = advanced                # advanced, extended, or basic
#sql_inheritance = on

# - Other Platforms & Clients -
#transform_null_equals = off

#---------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#---------------------------------------------------------------------------
#custom_variable_classes = ''           # list of custom variable class names

SELECT * FROM pg_stat_activity;

datid | datname | procpid | usesysid | usename | current_query | waiting | query_start | backend_start | client_addr | client_port
-------+---------+---------+----------+---------+---------------------------------+---------+-------------------------------+-------------------------------+-------------+-------------
16384 | hqdb | 3267 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.036781+01 | 2011-02-08 15:51:20.02413+01 | 127.0.0.1 | 47892
16384 | hqdb | 3268 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.050994+01 | 2011-02-08 15:51:20.047393+01 | 127.0.0.1 | 47893
16384 | hqdb | 3269 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.056661+01 | 2011-02-08 15:51:20.053201+01 | 127.0.0.1 | 47894
16384 | hqdb | 3271 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.062351+01 | 2011-02-08 15:51:20.058822+01 | 127.0.0.1 | 47895
16384 | hqdb | 3272 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.068328+01 | 2011-02-08 15:51:20.064517+01 | 127.0.0.1 | 47896
16384 | hqdb | 3273 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.07444+01 | 2011-02-08 15:51:20.070755+01 | 127.0.0.1 | 47897
16384 | hqdb | 3274 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.080941+01 | 2011-02-08 15:51:20.076983+01 | 127.0.0.1 | 47898
16384 | hqdb | 3275 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.08741+01 | 2011-02-08 15:51:20.083697+01 | 127.0.0.1 | 47899
16384 | hqdb | 3276 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.093597+01 | 2011-02-08 15:51:20.089977+01 | 127.0.0.1 | 47900
16384 | hqdb | 3277 | 10 | hqadmin | <IDLE> in transaction | f | 2011-02-08 15:51:20.133974+01 | 2011-02-08 15:51:20.096149+01 | 127.0.0.1 | 47901
16384 | hqdb | 3308 | 10 | hqadmin | <IDLE> | f | 2011-02-09 10:49:27.402197+01 | 2011-02-08 15:51:29.826321+01 | 127.0.0.1 | 47902
16384 | hqdb | 3309 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.572395+01 | 2011-02-08 15:51:29.865243+01 | 127.0.0.1 | 47903
16384 | hqdb | 3310 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.586273+01 | 2011-02-08 15:51:29.874346+01 | 127.0.0.1 | 47904
16384 | hqdb | 3311 | 10 | hqadmin | <IDLE> | f | 2011-02-09 10:10:03.024088+01 | 2011-02-08 15:51:29.883598+01 | 127.0.0.1 | 47905
16384 | hqdb | 3312 | 10 | hqadmin | <IDLE> in transaction | f | 2011-02-08 15:51:35.804457+01 | 2011-02-08 15:51:29.892925+01 | 127.0.0.1 | 47906
16384 | hqdb | 3418 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.580207+01 | 2011-02-08 15:51:55.56911+01 | 127.0.0.1 | 47910
16384 | hqdb | 3419 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.59781+01 | 2011-02-08 15:51:55.588609+01 | 127.0.0.1 | 47911
16384 | hqdb | 3422 | 10 | hqadmin | <IDLE> | f | 2011-02-09 10:10:02.668836+01 | 2011-02-08 15:51:55.603076+01 | 127.0.0.1 | 47914
16384 | hqdb | 3421 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.770427+01 | 2011-02-08 15:51:55.603086+01 | 127.0.0.1 | 47913
16384 | hqdb | 3420 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.680785+01 | 2011-02-08 15:51:55.637058+01 | 127.0.0.1 | 47912
16384 | hqdb | 18233 | 10 | hqadmin | SELECT * FROM pg_stat_activity; | f | 2011-02-09 10:49:29.688949+01 | 2011-02-09 10:48:13.031475+01 | | -1
(21 rows)
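Two of the backends above are sitting in "<IDLE> in transaction". For reference, a minimal sketch of how one might pull just those sessions out of pg_stat_activity from the shell, using the pre-9.2 column names shown in this output and the hqdb/hqadmin names from it (adjust to your own database and role):

psql -U hqadmin -d hqdb -c \
  "SELECT procpid, usename, client_addr, query_start
     FROM pg_stat_activity
    WHERE current_query = '<IDLE> in transaction'
    ORDER BY query_start;"

The oldest query_start values point at the transactions that have been holding locks or snapshots the longest.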

    Read the article

  • "No more threads can be created in the system" in Network and Sharing Center

    - by Zell Faze
    A while back I noticed on one of our laboratory computers (Windows 7, very little extra software installed) that the network connection icon in the system tray would claim that it had no network connection, even though it did. This issue would go away after the computer was rebooted, but would surface again the next time I looked at the computer (a few days later). Upon opening the Network and Sharing Center I am shown an actual error message, but not one that gives me much information about what the problem is. In place of the usual information about network adapters and whether you are connected to the Internet it simply says: "No more threads can be created in the system." The Event Viewer shows hundreds of events from different services with the same message. "Volume Shadow Copy Service error: Unexpected error calling routine CoCreateInstance. hr = 0x800700a4, No more threads can be created in the system."; "The WinHTTP Web Proxy Auto-Discovery Service service failed to start due to the following error: A thread could not be created for the service."; "The IP Helper service terminated with the following error: No more threads can be created in the system." As far as I can tell, this message seems to indicate some sort of resource leak in Windows: something is creating a large number of threads, and those threads are never being cleaned up. I've tried restarting WMI and several services related to networking, to no avail. Can anyone provide more information on what "No more threads can be created in the system" might mean and what I might be able to do to fix the issue? Currently the only solution appears to be rebooting.

    Read the article

  • Where can I ask questions that aren't IT questions?

    - by Adam Davis
    Thank you for your confidence in our abilities! But have you read the FAQ? We get a lot of interesting technical questions on here, but Server Fault is meant to be first and foremost a resource for system administrators and IT professionals. In other words, Server Fault isn't meant to cater to every computer problem - just those that only exist in, or can best be solved in, the IT and sysadmin domain. Yes, someone here might be able to help you, but you'll find that other forums more focused on your topic can give you a much better answer than a bunch of IT professionals. It's likely that your question will be downvoted, closed, and in some cases marked "offensive." It's not that we hate you, it's just that we like to keep our corn pops separate from our cocoa puffs. In that vein, here are a bunch of other forums where you can get help (this list contains the top two forums for each category as voted for below):
    Programming: Stack Overflow
    Consumer-Level Computer Hardware: Ars Technica OpenForum, Tom's Hardware Forums
    Computer Software: Anandtech Forum
    Web Design/Hosting/CMS: SitePoint Forums, Web Hosting Talk
    Math, Science, Engineering: Physics Forums, PlanetMath
    Other/General: Ask Metafilter, Google Search, Mahalo Answers
    Check out the answers below for even more suggestions! What forums can people go to to ask the questions that are off topic here? Please list only ONE forum per answer so votes can bring the best forums to the top. Return to FAQ Index

    Read the article

  • Strange stream of HTTP GET requests in Apache logs, from Amazon EC2 instances

    - by Alexandre Boeglin
    I just had a look at my Apache logs, and I see a lot of very similar requests:
    GET / HTTP/1.1
    User-Agent: curl/7.24.0 (i386-redhat-linux-gnu) libcurl/7.24.0 \ NSS/3.13.5.0 zlib/1.2.5 libidn/1.18 libssh2/1.2.2
    Host: [my_domain].org
    Accept: */*
    There's a steady stream of those, about 2 or 3 per minute; they all request the same domain and resource (there are slight variations in user-agent version numbers), and they come from a lot of different IPv4 and IPv6 addresses, in blocks that belong to Amazon EC2 (in Singapore, Japan, Ireland and the USA). I tried to look for an explanation online, or even just similar stories, but couldn't find any. Has anyone got a clue as to what this is? It doesn't look malicious per se, but it's annoying me, and I couldn't find any more information about it. I first suspected it could be a bot checking whether my server is still up, but: I don't remember subscribing to such a service; why would it need to check my site twice every minute; and why doesn't it use a clearly identifying FQDN? Or should I send this question to Amazon, via their abuse contact? Thanks!
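    For reference, before raising it with Amazon's abuse contact, a couple of one-liners can show how many distinct addresses are involved and how regular the pattern is. This is a minimal sketch only; the log path and the default combined log format are assumptions, so adjust them to match your vhost configuration:

    # requests for "/" grouped by client address, busiest first
    awk '$7 == "/" {print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head -20

    # reverse-resolve a handful of the addresses to confirm they are EC2 hosts
    awk '$7 == "/" {print $1}' /var/log/apache2/access.log | sort -u | head -5 | while read ip; do host "$ip"; done

    If the counts per address are low but the total is steady, that usually points to a distributed monitoring or crawling service rather than a single misbehaving client.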

    Read the article

  • "Could not claim interface on camera: -6" when trying to connect usb camera (Kinect)

    - by rzetterberg
    I have installed the freenect library from openkinect.org. That library ships with a demo application which you can run from the terminal to test the Kinect. However, when I run it I get the following output:
    richard@behemoth:~$ sudo freenect-glview
    Kinect camera test
    Number of devices found: 1
    Could not claim interface on camera: -6
    Could not open device
    This particular error is raised by the libusb function libusb_claim_interface, and the value -6 corresponds to LIBUSB_ERROR_BUSY. So my guess is that it has something to do with how the USB device is mounted or claimed, rather than with the freenect library or the Kinect itself. So my question is: how can I find out what resource is using this interface, and how can I free it so that I can access the device?
    Edit: What I have tried so far (just to be sure): rebooted; unplugged and re-plugged the Kinect; tried different USB ports; restarted udev.
    Additional information that might be useful:
    /etc/fstab:
    # /etc/fstab: static file system information.
    #
    # Use 'blkid -o value -s UUID' to print the universally unique identifier
    # for a device; this may be used with UUID= as a more robust way to name
    # devices that works even if disks are added and removed. See fstab(5).
    #
    # <file system> <mount point> <type> <options> <dump> <pass>
    proc /proc proc nodev,noexec,nosuid 0 0
    # / was on /dev/sda1 during installation
    UUID=1c73f217-ac8d-451b-8390-7a680628a856 / ext4 errors=remount-ro 0 1
    # swap was on /dev/sda5 during installation
    UUID=bb49bd29-07ec-45a0-bbab-46fb8362b06b none swap sw 0 0
    sudo uname -r:
    Linux behemoth 3.0.0-14-generic-pae #23-Ubuntu SMP Mon Nov 21 22:07:10 UTC 2011 i686 i686 i386 GNU/Linux
    cat /etc/lsb-release:
    DISTRIB_ID=Ubuntu
    DISTRIB_RELEASE=11.10
    DISTRIB_CODENAME=oneiric
    DISTRIB_DESCRIPTION="Ubuntu 11.10"
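    Since LIBUSB_ERROR_BUSY normally means some other driver already holds the interface, one way to narrow it down is to ask sysfs which kernel driver is bound to each USB interface. A rough sketch, where the only assumption about the hardware is Microsoft's USB vendor ID 045e:

    # identify the Kinect's devices (vendor 045e is Microsoft)
    lsusb | grep -i 045e

    # show which kernel driver, if any, has claimed each USB interface
    for d in /sys/bus/usb/devices/*:*; do
        printf '%s -> %s\n' "$d" "$(basename "$(readlink "$d/driver" 2>/dev/null)")"
    done

    # If a webcam driver such as gspca_kinect (shipped with 3.0+ kernels) turns out
    # to be bound to the camera interface, unloading it should free the interface:
    #   sudo modprobe -r gspca_kinect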

    Read the article

< Previous Page | 164 165 166 167 168 169 170 171 172 173 174 175  | Next Page >