Search Results

Search found 21926 results on 878 pages for 'drop down'.


  • View Weather Underground Forecasts in Google Chrome

    - by Asian Angel
    If you like a simple, straightforward interface for keeping up with weather forecasts, then join us as we look at the Weather Underground extension for Google Chrome.

    Weather Underground in Action: As soon as you click on the toolbar icon you will need to enter a location. Keep in mind that you will need to enter the city and country if using that option; going with less information will yield an error. Note: The extension did not work for some Asian locations during our tests. In honor of the Olympics we chose Vancouver, Canada. You can hover over the toolbar button to see the current conditions, or click to view the current day's conditions, the current day's forecast, and the forecast for the following three days. It is a simple, straightforward interface, and there are no options to worry about. Clicking on the "Detailed Forecast" link in the drop-down window will take you to the Weather Underground webpage for your location, while clicking on the "Weather Underground" link will take you to the Weather Underground U.S. homepage.

    Additional Weather Underground Fun: Since we were focusing on Weather Underground, we have an extra bit of fun for you. If you love being able to view a large-scale map of your location with current conditions and forecast combined, then you might want to have a look at Weather Underground's "wxmap" webpage. Using the link below you can access the basic starting page, where you will be asked to enter your location. Once you have entered the information you will see the default Terrain view for your location and a "Current Conditions & Forecast" window in the lower-left corner. You can modify how your map looks by choosing from Temperature, Precipitation, Clouds, Satellite, Hybrid, and Terrain views. Going full screen in your browser with this gives your monitor a wonderful and unique look that will have your family and friends asking how you did it. Note: Terrain view shown here. Clicking on the "Settings" link in the upper-left corner will let you tweak your map view very nicely.

    Conclusion: If you love using Weather Underground for your weather forecasts, then you can add a double dose of goodness to your browser.

    Links: Download the Weather Underground extension (Google Chrome Extensions) | Access the Full Screen Weather Underground Map & Forecast for your area

    Read the article

  • How to Create Views for All Tables with Oracle SQL Developer

    - by thatjeffsmith
    Got this question over the weekend via a friend and Oracle ACE Director, so I thought I would share the answer here. If you want to quickly generate DDL to create views for all the tables in your system, the easiest way to do that with SQL Developer is to create a data model. Wait, why would I want to do this? StackOverflow has a few things to say on this subject. So, start with importing a data dictionary.

    Step One: Open or Create a Model. In SQL Developer, go to View – Data Modeler – Browser. Then in the browser panel, expand your design and create a new Relational Model.

    Step Two: Import Your Data Dictionary. This is a fancy way of saying, 'suck objects out of the database into my model'. This opens a wizard to connect, select your schema(s), objects, etc. Once they're in your model, you're ready to cook with gas. I'm using HR (Human Resources) for this example. You should end up with something that looks like this – our favorite HR model. Now we're ready to generate the views!

    Step Three: Auto-generate the Views. Go to Tools – Data Modeler – Table to View Wizard. I don't want all my tables included, and I want to change the naming standard, so decide whether you want to change the default generated view names. By default the views will be created as 'V_TABLE_NAME'. If you don't like the 'V_' prefix, you can enter your own; you can also reference the object and model name with variables, as shown in the screenshot above. I'm going to go with something a little more personal. The views are the little green boxes in the diagram. Can't find your views? They should be grouped together in your diagram. Don't forget to use the Navigator to easily find and navigate to those model diagram objects!

    Step Four: Generate the DDL. OK, let's use the Generate DDL button on the toolbar. Un-check everything but your views. If you used a prefix, take advantage of that to create a filter – you might have existing views in your model that you don't want to include, right? Once you click 'OK' the DDL will be generated.
    -- Generated by Oracle SQL Developer Data Modeler 4.0.0.825
    -- at:   2013-11-04 10:26:39 EST
    -- site: Oracle Database 11g
    -- type: Oracle Database 11g

    CREATE OR REPLACE VIEW HR.TJS_BLOG_COUNTRIES (COUNTRY_ID, COUNTRY_NAME, REGION_ID) AS
    SELECT COUNTRY_ID, COUNTRY_NAME, REGION_ID
    FROM HR.COUNTRIES;

    CREATE OR REPLACE VIEW HR.TJS_BLOG_EMPLOYEES (EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID, DEPARTMENT_ID) AS
    SELECT EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID, DEPARTMENT_ID
    FROM HR.EMPLOYEES;

    CREATE OR REPLACE VIEW HR.TJS_BLOG_JOBS (JOB_ID, JOB_TITLE, MIN_SALARY, MAX_SALARY) AS
    SELECT JOB_ID, JOB_TITLE, MIN_SALARY, MAX_SALARY
    FROM HR.JOBS;

    CREATE OR REPLACE VIEW HR.TJS_BLOG_JOB_HISTORY (EMPLOYEE_ID, START_DATE, END_DATE, JOB_ID, DEPARTMENT_ID) AS
    SELECT EMPLOYEE_ID, START_DATE, END_DATE, JOB_ID, DEPARTMENT_ID
    FROM HR.JOB_HISTORY;

    CREATE OR REPLACE VIEW HR.TJS_BLOG_LOCATIONS (LOCATION_ID, STREET_ADDRESS, POSTAL_CODE, CITY, STATE_PROVINCE, COUNTRY_ID) AS
    SELECT LOCATION_ID, STREET_ADDRESS, POSTAL_CODE, CITY, STATE_PROVINCE, COUNTRY_ID
    FROM HR.LOCATIONS;

    CREATE OR REPLACE VIEW HR.TJS_BLOG_REGIONS (REGION_ID, REGION_NAME) AS
    SELECT REGION_ID, REGION_NAME
    FROM HR.REGIONS;

    -- Oracle SQL Developer Data Modeler Summary Report:
    --
    -- CREATE VIEW                  6
    -- (all other object types -- tables, indexes, packages, procedures,
    --  functions, triggers, types, clusters, contexts, databases, dimensions,
    --  directories, disk groups, roles, rollback segments, sequences,
    --  materialized views, synonyms, tablespaces, users, drops,
    --  redaction policies -- 0)
    --
    -- ERRORS                       0
    -- WARNINGS                     0

    You can then choose to save this to a file or not. This has a few steps, but as the number of tables in your system increases, so does the amount of time this feature can save you!
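    Side note: if you only need quick-and-dirty DDL and don't want to build a model at all, a data-dictionary query can generate similar statements. This is a minimal sketch, not part of the SQL Developer workflow above; it assumes you are connected as the schema owner in SQL*Plus or SQLcl, and it uses SELECT * rather than the explicit column lists the wizard produces, so treat it as a starting point only.

        -- Spool one CREATE VIEW statement per table, then review and run the script
        SET PAGESIZE 0 LINESIZE 200 FEEDBACK OFF
        SPOOL create_views.sql
        SELECT 'CREATE OR REPLACE VIEW V_' || table_name ||
               ' AS SELECT * FROM ' || table_name || ';'
          FROM user_tables
         ORDER BY table_name;
        SPOOL OFF
        -- Inspect create_views.sql, then execute it: @create_views.sql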

    Read the article

  • SQL SERVER – Data Pages in Buffer Pool – Data Stored in Memory Cache

    - by pinaldave
    Have you ever wondered what types of data are in your cache? During SQL Server trainings, I am usually asked if there is any way to know how much of a table's data is stored in the memory cache. The more detailed question I usually get is: if there are multiple indexes on a table (and used in a query), is the data of that single table stored multiple times in the memory cache, or only once? Here is a query you can run to figure out what kind of data is stored in the cache.

    USE AdventureWorks
    GO
    SELECT COUNT(*) AS cached_pages_count, name AS BaseTableName, IndexName, IndexTypeDesc
    FROM sys.dm_os_buffer_descriptors AS bd
    INNER JOIN (
        SELECT s_obj.name, s_obj.index_id, s_obj.allocation_unit_id, s_obj.OBJECT_ID,
               i.name IndexName, i.type_desc IndexTypeDesc
        FROM (
            SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id, allocation_unit_id, OBJECT_ID
            FROM sys.allocation_units AS au
            INNER JOIN sys.partitions AS p
                ON au.container_id = p.hobt_id AND (au.type = 1 OR au.type = 3)
            UNION ALL
            SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id, allocation_unit_id, OBJECT_ID
            FROM sys.allocation_units AS au
            INNER JOIN sys.partitions AS p
                ON au.container_id = p.partition_id AND au.type = 2
        ) AS s_obj
        LEFT JOIN sys.indexes i
            ON i.index_id = s_obj.index_id AND i.OBJECT_ID = s_obj.OBJECT_ID
    ) AS obj
        ON bd.allocation_unit_id = obj.allocation_unit_id
    WHERE database_id = DB_ID()
    GROUP BY name, index_id, IndexName, IndexTypeDesc
    ORDER BY cached_pages_count DESC;
    GO

    Now let us run the query above and observe its output. The query returns four columns: Cached_Pages_Count lists the pages cached in memory; BaseTableName lists the base table from which the data pages are cached; IndexName lists the name of the index from which pages are cached; and IndexTypeDesc lists the type of index.

    Now, let us do one more experiment here. Please note that you should not run this test on a production server, as it can severely degrade database performance.

    DBCC DROPCLEANBUFFERS

    This will drop all the clean buffers so we can start again from scratch. Now run the following script and check its execution plan.

    USE AdventureWorks
    GO
    SELECT UnitPrice, ModifiedDate
    FROM Sales.SalesOrderDetail
    WHERE SalesOrderDetailID BETWEEN 1 AND 100
    GO

    The execution plan shows the usage of two different indexes. Now, let us run the script that checks the pages cached in SQL Server. It gives us the following output. It is clear from the resultset that when more than one index is used, the data pages related to both (or all) of the indexes are stored in the memory cache separately. Let me know what you think of this article. I had great pleasure writing it because I was able to write on the subject I like most. In the next article, we will see exactly what data is cached and what is not, using a few undocumented commands. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: DMV, Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL DMV
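    As a quick companion check (a hedged sketch, not from the original article), you can also total the buffer pool per database before drilling into per-index detail; sys.dm_os_buffer_descriptors exposes one row per cached 8 KB page:

        SELECT DB_NAME(database_id) AS database_name,
               COUNT(*) AS cached_pages,
               COUNT(*) * 8 / 1024 AS cached_mb   -- each page is 8 KB
        FROM sys.dm_os_buffer_descriptors
        GROUP BY database_id
        ORDER BY cached_pages DESC;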

    Read the article

  • low-cost RAID NAS for home use?

    - by gravyface
    Have a noisy, power-hungry Pentium 4 based Ubuntu server that I want to replace with a nice, low-power mini-ITX/Intel Atom-based machine to do my network services (DHCP, DNS, IPsec, web/mail, FTP, etc.), and am thinking of a (hopefully) equally low-powered NAS using NFS over GbE with at least 1 TB of space and a RAID 5 (preferred) or RAID 1 (likely) configuration for redundancy, with a couple of spare disks I can swap in as needed down the road. Would I be better off getting a full-sized ATX mobo/case and configuring the RAID internally? I really want to keep power consumption down as much as possible, as I leave my home server up 24/7.
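    If you do configure the RAID internally, Linux software RAID via mdadm is the usual route on Ubuntu. A hedged sketch follows -- the device names (sdb/sdc/sdd), the ext3 filesystem, and the mount point are assumptions, not details from the question:

        # Create a 3-disk RAID 5 array, persist its configuration, and format it
        mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
        mdadm --detail --scan >> /etc/mdadm/mdadm.conf
        mkfs.ext3 /dev/md0
        mount /dev/md0 /srv/nas    # mount point is a placeholder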

    Read the article

  • List of Upcoming Appearances

    - by Chris Gardner
    Greetings. I know I have been in work-sponsored hiding lately. We are working furiously on a beta project to secure a contract, and I can't really talk about it yet. Hopefully, the contracts will be signed soon. Not only will we then have money, but I can talk about all this really cool tech I have been playing with. However, since the contract is not signed, I need to bring you up to date on where I will be during the summer. Let's face it: you can't be a speaker / blogger without pandering to shameless self-promotion. First, I will, once again, be staffing the Hands-on Labs at TechEd North America. Unfortunately, TechEd North America is already sold out for this year. However, if you're already going, drop by the labs and say hi. Also, keep an eye on Twitter to track me throughout the event, and look for a post in a few hours with my specific picks for the content I'm looking forward to seeing this year. Immediately following TechEd North America, I will be flying into Knoxville to speak at CodeStock, where I will present my introductory and intermediate Xbox 360 development talks. There is a TON of great content at CodeStock this year, but there are only about 50 tickets left. After that whirlwind of work, things settle down for a while. That means I'm available to speak at your user group, luncheon, bowling league, birthday party, anniversary, or bat mitzvah. Mid-August brings us to That Conference. This one is going to be a blast. If you haven't heard of That Conference yet, you should really check it out. This will also be my introductory and intermediate Xbox 360 development talks. This is a new conference, and it looks like it will be a great one. Finally, we turn our attention to DevLink. DevLink has the distinction of picking up my newest talk, Creating Stereoscopic 3D Graphics in XNA. On top of that, I'm giving a general Xbox 360 and Windows Phone 7 talk. DevLink has added a new "XNA and Kinect" track, so there will be a ton of great game content. That should bring us through the summer. As I solidify the stereoscopic talk, look for some content on it to creep up here. I will say it's the first topic I've played with that is easier in 3D than in 2D. Also, the organizers of Alabama Code Camp are still trying to reschedule the event; when that happens, I'll get the information out. Finally, we are looking to expand our development team. If you are interested in working for / with me, keep an eye on the T & W Operations website. I know we're immediately looking for a junior-level developer, but I think a few higher-level positions may come up soon. You MUST apply through the website, but drop me a personal line if you do apply and I'll keep an eye out for your application.

    Read the article

  • Copyrights concerning code snippets and larger amounts of code

    - by JustcallmeDrago
    I am designing a public code repository. Users will be allowed to post and edit whatever amount of code they want, from snippets to entire multi-file projects. I have a few major legal concerns about this: Not getting sued/shut down – I feel the site would be a much easier target than tracking down an individual user to sue. I have looked around a bit, and links to legal info in the footer of each page are common. What specific things should I do – and what does a site such as YouTube (where I see copyrighted material all the time) do – for protection? Citing sources and editing sourced code – If a user wants to post code that isn't theirs, what concerns/safeguards should I have? Will a link suffice, and what more do I need to allow the code to be edited (to improve it, for example)? What can happen if a user posts copyrighted code without citing it? Large chunks of code – What legal differences should I look out for as the amount of code grows? Not having a mess of licenses for the site – I would like to have a single license (like RosettaCode) that keeps interaction on the site simple. I want the code to be postable and editable. I have looked into StackOverflow's Creative Commons license a little, and it says: "If you alter, transform, or build upon this work, you may distribute the resulting work only under the same or similar license to this one." And on RosettaCode: "All software found on Rosetta Code should be considered potentially hazardous. Use at your own risk. Be aware that all code on Rosetta Code is under the GNU Free Documentation License, as are any edits made by contributors. See Rosetta Code:Copyrights for details." What other licenses are like this? Commercializing the site – In what ways can I and can't I make money off a site that contains code like this? All code will be publicly visible. Initial thoughts are running ads or charging for advanced features.

    Read the article

  • Assistance in setting up new APC Smart-UPS RT on a new VMware enviroment

    - by user38085
    I'm new to the realm of setting up an APC Smart-UPS RT 8000VA UPS with a network management card (AP9618). The project calls for upgrading the card's firmware to the newest and greatest. It also calls for the PowerChute Business Edition software to be installed, with email notifications set up for temperature, shutdown, and battery low. I know I'll have to use the serial cable to flash the firmware and install the software on one Server 2003 box. Also on that server I'll have to install the software and set up the GUI (IP address) interface. What confuses me most is the overall process and the steps to follow without taking down the network, which would be very bad. Does flashing the firmware take down the UPS? Do I have to run BOOTP commands to set up the network card? Also, no agents will be used on any of the VMware OSs and no SNMP traps will be used.

    Read the article

  • Secure Government Series Part 3

    - by Naresh Persaud
    Secure Government Training Series: Safeguarding Government Cyberspace. Click here to register for the live webcast. Cybersecurity threats represent one of the most serious national security, public safety, and economic challenges. While technologies empower government to lead and innovate, they also enable those who seek to disrupt and destroy progress. Cloud computing, mobile devices, and social networks help government reduce costs and streamline service delivery, but they also introduce heightened security vulnerabilities. How can government organizations keep pace with heightened service delivery demands and advancements in technology without compromising security? Join us November 28th for a webcast, part of the "Secure Government Training Series", to learn about a security portfolio that helps organizations mitigate cyber attacks by providing full-spectrum cybersecurity capabilities that harden the data tier, lock down sensitive information, and provide access controls and visibility for frequently targeted systems. Gain insight into an integrated security framework and overall strategy for preventing attacks that will help your organization: deploy resilient IT infrastructure; catalog and classify sensitive and mission-critical data; secure the enterprise data tier and lock down trusted insider privileges at all levels; automate and centralize enterprise auditing; and enable automated alerting and situational awareness of security threats and incidents. For more information, access the Secure Government Resource Center or, to speak with an Oracle representative, please call 1.800.ORACLE1. LIVE Webcast: Safeguarding Government Cyberspace. Date: Wednesday, November 28th, 2012. Time: 2:00 p.m. ET. Visit the Secure Government Resource Center for information on enterprise security solutions that help government safeguard information, resources and networks.

    Read the article

  • Getting Started with Chart control in ASP.Net 4.0

    - by sreejukg
    In this article I am going to demonstrate the Chart control available in ASP.NET 4 and Visual Studio 2010. Most web applications need to generate reports for business users, and business users are happier viewing results in a graphical format than as plain numbers. For the purpose of this demonstration, I have created a sales table, and I am going to create charts from this sales data. The sales table looks as follows. I have created an ASP.NET web application project in Visual Studio 2010, with a default.aspx page that I am going to use for the demonstration. First I am going to add a Chart control to the page. Visual Studio 2010 ships with a Chart control, which appears under the Data tab in the toolbox. Drag and drop the Chart control onto the default.aspx page; Visual Studio adds the markup below to the page. <asp:Chart ID="Chart1" runat="server"></asp:Chart> In the designer view, the Chart control gives the following output. As you can see, this is exactly like other server controls in ASP.NET and, like the other controls under the Data tab, the Chart control is a data-bound control. So I am going to bind it to my sales data. From the design view, right-click the Chart control and select "Show Smart Tag". Here you need to choose the data source property and the chart type. From the "Choose Data Source" drop-down, select "New data source". In the data source configuration wizard, select the SQL database and write the query to retrieve the data. First I am going to show the chart of the sales amount achieved by each salesperson, using the following query inside the SqlDataSource select command: "SELECT SUM(SaleAmount) AS amount, SalesPerson FROM SalesData GROUP BY SalesPerson". This query gives the amount of sales achieved by each salesperson. The markup of the SqlDataSource is as follows. <asp:SqlDataSource ID="SqlDataSource1" runat="server" ConnectionString="<%$ ConnectionStrings:SampleConnectionString %>" SelectCommand="SELECT SUM(SaleAmount) as amount, SalesPerson FROM SalesData GROUP BY SalesPerson"></asp:SqlDataSource> Once you have selected the data source for the Chart control, you need to select the X and Y values for the columns. I have entered SalesPerson as the X value member and amount as the Y value member (see the markup sketch below). After these modifications the Chart control looks as follows. Press F5 to run the application. The output of the page is as follows. Using ASP.NET it is much easier to represent your data in graphical format, and to show this chart I didn't even write a single line of code. The Chart control is a great tool that helps developers show business intelligence in their applications without using third-party products. I will write another blog exploring further possibilities and more reports using the same sales data. If you want the project in zipped format, post your email below.
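    For reference, the markup the designer produces after you pick the X and Y value members looks roughly like the following. This is a sketch based on the steps above; the Series and ChartArea names are defaults I have assumed, not values from the article:

        <asp:Chart ID="Chart1" runat="server" DataSourceID="SqlDataSource1">
            <Series>
                <asp:Series Name="Series1" ChartType="Column"
                    XValueMember="SalesPerson" YValueMembers="amount" />
            </Series>
            <ChartAreas>
                <asp:ChartArea Name="ChartArea1" />
            </ChartAreas>
        </asp:Chart>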

    Read the article

  • Why are data structures so important in interviews?

    - by Vamsi Emani
    I am a newbie in the corporate world, having recently graduated in computer science. I am a Java/Groovy developer. I am a quick learner and can pick up new frameworks, APIs, or even programming languages within a considerably short time. That said, I must confess that I was not so strong in data structures when I graduated from college. Throughout the campus placements during my graduation, I witnessed that most of the big tech companies like Amazon, Microsoft, etc. focused mainly on data structures. It appears as if data structures are the only thing they expect from a graduate. Adding to this, there is a general perception that a good programmer is necessarily one with good knowledge of data structures. To be honest, I felt bad about that. I write good code and follow standard design patterns. I do use data structures, but at a superficial level, via Java's exposed APIs like ArrayList, LinkedList, etc. But the companies usually focused on the intricate aspects of data structures, like pointer-based memory manipulation and time complexities. Probably because of my Java-ish background, back then I understood code efficiency and logic only in terms of object-oriented programming – objects, instances, and so on – and never drilled down to the level of bits and bytes. I did not want people to look down on me for this knowledge deficit in data structures. So really, why all this emphasis on data structures? Does not having knowledge of data structures really affect one's career in programming? Or is knowledge of this subject really a sufficient basis to differentiate good and bad programmers?

    Read the article

  • Trouble connecting a Ubuntu system to IPv6 tunnel over NAT

    - by John Millikin
    I'm trying to set up an IPv6 tunnel via Hurricane Electric's tunnel-broker service. I've configured my system using their example commands:

    # $ipv4a = tunnel server's IPv4 IP
    # $ipv4b = user's IPv4 IP
    # $ipv6a = tunnel server's side of point-to-point /64 allocation
    # $ipv6b = user's side of point-to-point /64 allocation
    ip tunnel add he-ipv6 mode sit remote $ipv4a local $ipv4b ttl 255
    ip link set he-ipv6 up
    ip addr add $ipv6b dev he-ipv6
    ip route add ::/0 dev he-ipv6

    and have configured my desktop to be in my NAT router's DMZ. The router is running Tomato firmware. But I can't ping any IPv6 services:

    $ ping6 -I he-ipv6 '2001:470:1f04:454::1'
    PING 2001:470:1f04:454::1(2001:470:1f04:454::1) from 2001:470:1f04:454::2 he-ipv6: 56 data bytes
    From 2001:470:1f04:454::2 icmp_seq=1 Destination unreachable: Address unreachable
    From 2001:470:1f04:454::2 icmp_seq=2 Destination unreachable: Address unreachable

    I can ping my local address:

    $ ping6 -I he-ipv6 '2001:470:1f04:454::2'
    PING 2001:470:1f04:454::2(2001:470:1f04:454::2) from 2001:470:1f04:454::2 he-ipv6: 56 data bytes
    64 bytes from 2001:470:1f04:454::2: icmp_seq=1 ttl=64 time=0.037 ms
    64 bytes from 2001:470:1f04:454::2: icmp_seq=2 ttl=64 time=0.039 ms

    I don't know much about routing, but results I found online suggested the output of ip -6 route and ip addr could be useful:

    $ ip -6 route
    2001:470:1f04:454::/64 via :: dev he-ipv6 proto kernel metric 256 mtu 1480 advmss 1420 hoplimit 4294967295
    fe80::/64 dev virbr0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 4294967295
    fe80::/64 dev eth1 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 4294967295
    fe80::/64 via :: dev he-ipv6 proto kernel metric 256 mtu 1480 advmss 1420 hoplimit 4294967295
    default dev he-ipv6 metric 1024 mtu 1480 advmss 1420 hoplimit 4294967295

    $ ip addr
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 100
        link/ether 00:1c:c0:a1:98:b2 brd ff:ff:ff:ff:ff:ff
        inet 192.168.1.10/24 brd 192.168.1.255 scope global eth1
        inet6 fe80::21c:c0ff:fea1:98b2/64 scope link
           valid_lft forever preferred_lft forever
    3: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
        link/ether 36:4c:33:ab:0d:c6 brd ff:ff:ff:ff:ff:ff
        inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
        inet6 fe80::344c:33ff:feab:dc6/64 scope link
           valid_lft forever preferred_lft forever
    4: vboxnet0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
        link/ether 00:76:62:6e:65:74 brd ff:ff:ff:ff:ff:ff
    5: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
        link/ether 7e:29:5e:7c:ba:93 brd ff:ff:ff:ff:ff:ff
    6: sit0: <NOARP> mtu 1480 qdisc noop state DOWN
        link/sit 0.0.0.0 brd 0.0.0.0
    7: he-ipv6@NONE: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN
        link/sit 24.130.225.239 peer 72.52.104.74
        inet6 2001:470:1f04:454::2/64 scope global
           valid_lft forever preferred_lft forever
        inet6 fe80::1882:e1ef/128 scope link
           valid_lft forever preferred_lft forever
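    Two hedged suggestions beyond what the question shows. First, when the tunnel endpoint sits behind NAT, Hurricane Electric's setup notes generally recommend using the machine's private LAN address (here 192.168.1.10) as the local side rather than the public IP, and the router must pass protocol 41 (6in4), which a DMZ setting normally does. Second, the tunnel can be made persistent via /etc/network/interfaces instead of ad-hoc ip commands; a sketch of the interfaces(5) v4tunnel stanza, with placeholders rather than verified values:

        auto he-ipv6
        iface he-ipv6 inet6 v4tunnel
            address 2001:470:1f04:454::2
            netmask 64
            endpoint <tunnel server IPv4>
            local 192.168.1.10
            gateway 2001:470:1f04:454::1
            ttl 255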

    Read the article

  • Unable to suspend with FGLRX enabled

    - by schmmd
    I am unable to suspend on a fully updated (as of yesterday, 29 March 2011) Ubuntu 10.10 installation (kernel 2.6.35-28). Here is some of my hardware: Motherboard: Gigabyte GA-X58A-UD3R; Video: Radeon HD-567X-YNF3. Initially, when I chose either Suspend or Hibernate, the machine would almost go into suspend but would never power down; instead it would bounce back to the login screen. This was due to a problem with the USB3 ports being unable to suspend (I noticed this in /var/log/kern.log). Disabling the USB3 ports in the BIOS fixed that issue, and now Suspend and Hibernate both power down the machine. It successfully awakens from Hibernate; however, it will not return from Suspend. The mouse and keyboard are not powered and the monitor has no signal. These devices are still not powered after a restart, so I must power-cycle the machine. The pm-suspend log ends after it states that it entered suspend (i.e. there is no record of any resume code running). I discovered that acpitool -s suspends the machine and resumes successfully exactly once; the second time, the machine will not resume. I am not sure how these two tools handle suspend differently. UPDATE: the problem was introduced somewhere between 2.6.35-22 and 2.6.35-28. I have both kernels installed presently; suspend works fine with 2.6.35-22 but not with 2.6.35-28.
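    One hedged way to narrow down whether pm-utils or the kernel/driver is at fault is to trigger suspend at the kernel level directly (this is the raw interface that both pm-suspend and acpitool ultimately use) and inspect the logs afterward; generic commands, not specific to this machine:

        sudo sh -c 'echo mem > /sys/power/state'   # raw kernel suspend, bypassing pm-utils
        dmesg | tail -n 50                         # after resume, or from a rescue boot
        cat /var/log/pm-suspend.log                # pm-utils' own trace for comparison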

    Read the article

  • OCS 2007 R2 Client not syncing Address book

    - by Noah
    I've checked online for most of the solutions to this issue, but nothing seems to be working. When I check the log files on our OCS 2007 R2 server, it identifies 25 users in the address book. However, when I try to force a sync with the clients, they do not update. I can find the users if I search for them, but they are not coming down on their own. Is there anything I can check or force from the client side? There is no address book file locally to delete and re-download.
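    A commonly cited client-side workaround for OCS 2007 R2 (hedged -- verify the exact profile path for your OS): sign out of Communicator, delete the local GalContacts.db and GalContacts.db.idx files from the user's Communicator profile folder if present, and set the documented GalDownloadInitialDelay registry value to 0 so the client fetches the address book immediately at next sign-in instead of waiting up to an hour:

        reg add HKLM\Software\Policies\Microsoft\Communicator /v GalDownloadInitialDelay /t REG_DWORD /d 0 /f
        rem Then sign out of Communicator, remove GalContacts.db* from the user's
        rem Communicator profile folder, and sign back in to force a fresh download.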

    Read the article

  • SQL SERVER – Puzzle – Statistics are not Updated but are Created Once

    - by pinaldave
    After having an excellent response to my quiz – Why SELECT * throws an error but SELECT COUNT(*) does not? – I have decided to ask another puzzling question. I am running this test on SQL Server 2008 R2. Here is the quick scenario for my setup: create a table; insert 1,000 records; check the statistics; now insert 10 times more rows – 10,000 rows; check the statistics – they will NOT be updated. Note: Auto Update Statistics and Auto Create Statistics for the database are TRUE. Expected result – statistics should be updated (see SQL SERVER – When are Statistics Updated – What triggers Statistics to Update). Now the question is: why are the statistics not updated? The common answer is that we can update the statistics ourselves using UPDATE STATISTICS TableName WITH FULLSCAN, ALL. However, the solution I am looking for is one where statistics are updated automatically based on the algorithm mentioned here. Now the solution is to ____________________. Vinod Kumar is not allowed to participate here, as he is the one who helped me build this puzzle. I will publish the solution next week. Please leave a comment, and if your comment contains a valid answer, I will publish it with due credit. Here is the script to reproduce the scenario I mentioned.

    -- Execution Plans Difference
    -- Create Sample Database
    CREATE DATABASE SampleDB
    GO
    USE SampleDB
    GO
    -- Create Table
    CREATE TABLE ExecTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100))
    GO
    -- Insert One Thousand Records -- INSERT 1
    INSERT INTO ExecTable (ID, FirstName, LastName, City)
    SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
        'Bob',
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas'
             ELSE 'Houston' END
    FROM sys.all_objects a
    CROSS JOIN sys.all_objects b
    GO
    -- Display statistics of the table - none listed
    sp_helpstats N'ExecTable', 'ALL'
    GO
    -- Select Statement
    SELECT FirstName, LastName, City FROM ExecTable WHERE City = 'New York'
    GO
    -- Display statistics of the table
    sp_helpstats N'ExecTable', 'ALL'
    GO
    -- NOTE: Replace _WA_Sys_00000004_7D78A4E7 with the stats name returned above
    DBCC SHOW_STATISTICS('ExecTable', _WA_Sys_00000004_7D78A4E7);
    GO
    --------------------------------------------------------------
    -- Round 2
    -- Insert Ten Thousand Records -- INSERT 2
    INSERT INTO ExecTable (ID, FirstName, LastName, City)
    SELECT TOP 10000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
        'Bob',
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas'
             ELSE 'Houston' END
    FROM sys.all_objects a
    CROSS JOIN sys.all_objects b
    GO
    -- Select Statement
    SELECT FirstName, LastName, City FROM ExecTable WHERE City = 'New York'
    GO
    -- Display statistics of the table
    sp_helpstats N'ExecTable', 'ALL'
    GO
    -- NOTE: Replace _WA_Sys_00000004_7D78A4E7 with the stats name returned above
    DBCC SHOW_STATISTICS('ExecTable', _WA_Sys_00000004_7D78A4E7);
    GO
    -- You will notice that the statistics still reflect only 1000 rows
    -- Clean up Database
    DROP TABLE ExecTable
    GO
    USE MASTER
    GO
    ALTER DATABASE SampleDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    GO
    DROP DATABASE SampleDB
    GO

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Statistics, Statistics
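    A quick way to verify the puzzle's behavior for yourself (a small addition, not part of the original script) is to check each statistic's last update time with STATS_DATE after every insert:

        SELECT s.name, STATS_DATE(s.object_id, s.stats_id) AS last_updated
        FROM sys.stats AS s
        WHERE s.object_id = OBJECT_ID('ExecTable');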

    Read the article

  • Why do marketing employees get their own office, yet programmers are jammed in a room as many as possible?

    - by TheImirOfGroofunkistan
    I don't understand why many (many) companies treat software developers like they are assembly-line workers making widgets. Joel Spolsky has a great example of the problems this creates: "With programmers, it's especially hard. Productivity depends on being able to juggle a lot of little details in short term memory all at once. Any kind of interruption can cause these details to come crashing down. When you resume work, you can't remember any of the details (like local variable names you were using, or where you were up to in implementing that search algorithm) and you have to keep looking these things up, which slows you down a lot until you get back up to speed. Here's the simple algebra. Let's say (as the evidence seems to suggest) that if we interrupt a programmer, even for a minute, we're really blowing away 15 minutes of productivity. For this example, let's put two programmers, Jeff and Mutt, in open cubicles next to each other in a standard Dilbert veal-fattening farm. Mutt can't remember the name of the Unicode version of the strcpy function. He could look it up, which takes 30 seconds, or he could ask Jeff, which takes 15 seconds. Since he's sitting right next to Jeff, he asks Jeff. Jeff gets distracted and loses 15 minutes of productivity (to save Mutt 15 seconds). Now let's move them into separate offices with walls and doors. Now when Mutt can't remember the name of that function, he could look it up, which still takes 30 seconds, or he could ask Jeff, which now takes 45 seconds and involves standing up (not an easy task given the average physical fitness of programmers!). So he looks it up. So now Mutt loses 30 seconds of productivity, but we save 15 minutes for Jeff. Ahhh!" (More Spolsky on offices is linked in the original question.) Why don't managers and owners see this?

    Read the article

  • Silverlight Cream for June 08, 2010 -- #877

    - by Dave Campbell
    In this Issue: Miroslav Miroslavov, Chris Klug, Beau, Christian Schormann (×2), Dan Wahlin, Pete Brown, Michael S. Scherotter, Philipp Sumi, Andy Wigley, and Phil Middlemiss. Shoutouts: Mark Tucker set about learning Caliburn, and in the process is writing a Caliburn book: Chapters 1-3. Jesse Liberty has a great link-laden post up about why we should all be learning/using Blend: "Why Developers Should, Must, Do Care About The New Expression Blend" – be sure to read what he says about WP7 development, however! Charlie Kindel announced an install problem with the Developer Tools CTP refresh and the WP7 tools... check this out if you're having problems. John Papa has a good post up on the happenings yesterday: Expression Studio 4 Launch of Blend, SketchFlow, Encoder and More! Erik Mork & Company's latest "This Week in Silverlight" is titled "First Drop: Prism v4 – First Drop is Available". From SilverlightCream.com: Animated navigation between Pages – Miroslav Miroslavov has Part 8 of his "Silverlight in Action" series up, detailing cool things from the CompleteIT site... this one is on animated navigation between pages. Subtitling videos – Chris Klug got a gig adding subtitles to videos for Microsoft (sweet)... and no, not *that* kind of subtitles... read how he approached the final solution. Silverlight Watermark TextBox – I'm not sure we can have too many watermark TextBoxes, and neither is Beau, who sent me a link to this one... give it a dance and decide. Blend 4: Collaborative SketchFlow Feedback with SharePoint – With the new Blend release, Christian Schormann has a post up describing the link-up with SharePoint for sharing SketchFlow projects and getting feedback. New Utility, Links, and Tutorials for Path-Based Layout – Christian Schormann also has a collection of resources for path-based layouts, including a utility "that lets you apply a whole bunch of position-specific effects without having to write any code"... lots of links to resources here. Tales from the Trenches – Building a Real-World Silverlight Line of Business Application – Dan Wahlin draws on his recent experience and lays out some of the fun and pitfalls of building LOB apps in Silverlight... WCF, MVVM, slides, and code included. WPF (and Silverlight): Choose your Fonts and Text Rendering Options Wisely – Pete Brown has a great post up on using fonts wisely across multiple platforms... lots of info and good discussion in the comments as well. Ball Watch USA – Remember the awesome watch Michael S. Scherotter did in Silverlight 1 and then updated to Silverlight 2? Well... there's now a contest under way and 8 videos to help you get started... all good stuff, and good luck! Michael has a post up about the contest: Enter to Win a Ball Watch by Creating One in Silverlight. Announcing Sketchables – Rapid Mockup Creation with SketchFlow – By way of Jesse Liberty (http://jesseliberty.com/2010/06/08/why-developers-should-must-do-care-about-the-new-expression-blend/), this is a cool production by Philipp Sumi about a simple mockup framework he's created. Perst – a database for Windows Phone 7 Silverlight – I think one of my first comments to Michael Washington back at the MVP Summit 2010 was that we'd need a database engine, and – too cool – we've got one; Andy Wigley discusses Perst in this post... to save you some time, here's the Perst site. A Chrome and Glass Theme – Part 7 – Phil Middlemiss has part 7 of his great theme-building series up... this time he's giving the accordion control a once-over.

    Read the article

  • Migrating SQL Server Databases – The DBA’s Checklist (Part 3)

    - by Sadequl Hussain
    Continuing from Part 2 of the Database Migration Checklist series: Step 10: Full-text catalogs and full-text indexing. This is one area of SQL Server where people do not seem to take notice unless something goes wrong. Full-text functionality is a specialised area in database application development and is not usually implemented in your everyday OLTP systems. Nevertheless, if you are migrating a database that uses full-text indexing on one or more tables, you need to be aware of a few points. First of all, SQL Server 2005 now allows full-text catalog files to be restored or attached along with the rest of the database. However, after migration, if you are unable to look at the properties of any full-text catalog, you are probably better off dropping and recreating it. You may also get the following error messages along the way: "Msg 9954, Level 16, State 2, Line 1 – The Full-Text Service (msftesql) is disabled. The system administrator must enable this service." This basically means the full-text service is not running (disabled or stopped) on the destination instance; you will need to start it from the Configuration Manager. Similarly, if you get the following message, you will also need to drop and recreate the catalog and populate it: "Msg 7624, Level 16, State 1, Line 1 – Full-text catalog 'catalog_name' is in an unusable state. Drop and re-create this full-text catalog." A full population of full-text indexes can be a time- and resource-intensive operation, so you will obviously want to schedule it for low-usage hours if the database is restored on an existing production server. Also, bear in mind that any scheduled job that existed on the source server for populating the full-text catalog (e.g. a nightly process for incremental updates) will need to be re-created on the destination. Step 11: Database collation considerations. Another sticky area to consider during a migration is the collation setting. Ideally you would want to restore or attach the database on a SQL Server instance with the same collation. Although not commonly used, SQL Server does allow you to change a database's collation with the ALTER DATABASE command: ALTER DATABASE database_name COLLATE collation_name. You should not use this command without good reason, as it can get really dangerous. When you change the database collation, it does not change the collation of the existing user table columns. However, the columns of every new table, every new UDT, and subsequently created variables or parameters in code will use the new setting. The collation of every char, nchar, varchar, nvarchar, text or ntext field of the system tables will also be changed. Stored procedure and function parameters will be changed to the new collation, and finally, every character-based system data type and user-defined data type will also be affected. And the change may not be successful if there are dependent objects involved: you may get one or more messages like "Cannot ALTER 'object_name' because it is being referenced by object 'dependent_object_name'." That is why it is important to test and check for collation-related issues. Collation also affects queries that compare character-based data. If errors arise because the two sides of a comparison are in different collation orders, the COLLATE keyword can be used to cast one side to the same collation as the other, as sketched below. Continues…
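    To make that last point concrete, here is a hedged sketch (hypothetical table names and an example collation, not values from the article) of checking a database's collation and casting one side of a comparison so the two sides agree:

        -- Check the current database collation
        SELECT DATABASEPROPERTYEX(DB_NAME(), 'Collation');

        -- Cast one side of a join so both sides compare under the same collation
        SELECT a.Code
        FROM dbo.TableA AS a
        INNER JOIN dbo.TableB AS b
            ON a.Code = b.Code COLLATE SQL_Latin1_General_CP1_CI_AS;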

    Read the article

  • /var/run/httpd.pid missing...

    - by user38043
    Recently one of our web servers' httpd stopped working and I haven't been able to find the problem. Today I sat down and went through every directory referenced in httpd.conf and found an issue: /var/run/httpd.pid is missing from the folder. All other files are there and seem to be fine. I cannot create a new file with the same name in vi, and I have no idea what could have caused this. I imagine it was caused by a cold reboot at some stage, as no other extraordinary processes were running on this server at the time it went down. I am running CentOS 3. How can I reinstate this file?
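    Worth noting: httpd creates its pid file itself each time it starts, so an absent /var/run/httpd.pid usually just means the daemon is not running (or the PidFile directive points elsewhere), not that the file was permanently "lost". A hedged first-pass diagnosis on a CentOS-era box:

        httpd -t                                     # syntax-check the configuration
        grep -i pidfile /etc/httpd/conf/httpd.conf   # confirm where the pid file should go
        tail -n 50 /var/log/httpd/error_log          # look for why the last start failed
        service httpd start                          # a successful start recreates httpd.pid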

    Read the article

  • Drupal CMS most Stable for High Traffic

    - by Aditi
    Drupal users report higher satisfaction with their CMS than Joomla users do, for a number of reasons. If you are choosing a high-performance platform to run a high-traffic website, Drupal is a strong candidate.

    Overload Scenario: Drupal is a scalable, high-performance CMS and is stable under heavy load. If your server is pushed beyond its capacity, Drupal shuts off gracefully and doesn't crash; as soon as the server is back within its traffic capability, Drupal handles all requests smoothly again. For example, if your dedicated server can handle a maximum of 50,000 visits a day, and on a lucky day your news creates a buzz in social media and traffic rises to 70,000 in one day, the server is overloaded. Under that load many systems crash, sometimes permanently damaging the database; a Drupal site instead stops accepting requests gracefully and, as soon as traffic falls back within the server's capacity, accepts all requests again.

    Extensibility: Drupal users know that their add-ons integrate better with the core, and the framework makes it easier to extend the CMS's capabilities, which keeps a heavily extended installation stable – unlike Joomla, which loses its strength when many plugins and heavy customizations are running. In any CMS, a large number of plugins adds complexity to the content pipeline and reduces the ability to handle high-traffic requests.

    Accessibility Management (ACL): Chances are that, as a high-traffic website, you have various users and content contributors. ACL means group roles: assigning people to the various registered user levels and allocating different privileges, the most common example being the ability to see or edit a section or selected pages. This efficient feature sets Drupal apart from other CMSs out there.

    Read the article

  • Methodology behind fetching large XML data sets in pieces

    - by Jerry Dodge
    I am working on an HTTP server in Delphi which simply sends back a custom XML dataset. I am not following any standard format such as SOAP. I have the system working seamlessly, except for one small flaw: when I have a very large dataset to send back to the client, it might take up to 2 minutes for all the data to be transferred. The HTTP server I'm building is essentially an XML-data-based API around a database, implementing the common business rules, so the requests are specific to the data behind the system. When, for example, I fetch a large set of product data, I would like to break it down and send it back piece by piece. However, a single HTTP request calls for a single response; I can't keep feeding the client multiple XML packets unless the client explicitly requests them. I don't have any session management, just an API key. I know that if I had sessions, I could keep a dataset alive temporarily for a client, and they could request bits and pieces of it. However, without session management, I would have to execute the SQL query multiple times (once for each chunk of data), and in the meantime, if the data changes, the "pages" might get messed up, causing items to show on the wrong pages after navigating to a different page. So how is this commonly handled? What's the methodology behind breaking a large XML dataset into chunks to ease the load?
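    One common answer is keyset (seek-based) pagination: instead of a page number, the client sends the last key it received, and each request returns the next chunk after that key. Because the boundary is a key rather than an offset, concurrent inserts or deletes cannot shuffle items between pages. A hedged SQL sketch, using a hypothetical Products table rather than the asker's actual schema:

        -- @LastID = highest ID the client has already received; @PageSize = chunk size
        SELECT TOP (@PageSize) ID, Name, Price
        FROM dbo.Products
        WHERE ID > @LastID
        ORDER BY ID;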

    Read the article

  • xubuntu 12.04 totally refuses to boot after install with wubi on old computer. any tips or suggestions on things i can do to fix this problem?

    - by WizardinBlack
    So, I downloaded Xubuntu 12.04 32-bit (not the alternate version) from the xubuntu.org website and placed the .zip into a folder named XUBUNTU on my desktop. I then extracted the files to that folder and ran Wubi. I selected Xubuntu from the drop-down and clicked install. I let it do its thing and then chose the option to reboot when prompted. This is when I went to my girlfriend's house for the day and left my laptop (Dell Latitude 120L) to finish the install. I came back home (about 8 hours later) to find my computer on the Windows 7 login screen, and I rebooted once more. Xubuntu booted just fine and things seemed to work perfectly. I turned off the PC and went to bed. The next morning I booted Xubuntu again without problems. I rebooted and went to Windows; then the next time I tried booting Xubuntu, in the middle of the splash screen it went black and the CPU light stopped blinking. I force-shut-down the computer and tried booting again, and the same thing happened (except that instead of going black, the splash just froze). I went back to Windows, uninstalled and reinstalled the OS, and haven't been able to boot it since. I even tried booting Lubuntu, with the same problem. Any tips or suggestions on things I can do to fix this? Please help!

    Read the article

  • Coldfusion autorestart

    - by Comcar
    ColdFusion is automatically restarting, a lot. It comes in waves: everything seems fine for a while, then the server struggles for a few minutes and restarts repeatedly before settling down again. I have FusionReactor installed, but when CF goes down FR stops logging, so it's not really helping; looking through the archived logs just shows gaps. These are all the occurrences of the phrase "ColdFusion started" today:

    [root@server2 logs]# grep -i "Coldfusion started" server.log | grep "11/27/12"
    "Information","main","11/27/12","01:49:35",,"ColdFusion started"
    "Information","main","11/27/12","01:50:46",,"ColdFusion started"
    "Information","main","11/27/12","01:52:39",,"ColdFusion started"
    "Information","main","11/27/12","01:54:08",,"ColdFusion started"
    "Information","main","11/27/12","01:55:12",,"ColdFusion started"
    "Information","main","11/27/12","01:56:29",,"ColdFusion started"
    "Information","main","11/27/12","01:57:36",,"ColdFusion started"
    "Information","main","11/27/12","01:58:57",,"ColdFusion started"
    "Information","main","11/27/12","01:59:56",,"ColdFusion started"
    "Information","main","11/27/12","02:01:38",,"ColdFusion started"
    "Information","main","11/27/12","02:03:11",,"ColdFusion started"
    "Information","main","11/27/12","02:04:41",,"ColdFusion started"
    "Information","main","11/27/12","02:07:53",,"ColdFusion started"
    "Information","main","11/27/12","02:10:45",,"ColdFusion started"
    "Information","main","11/27/12","02:11:49",,"ColdFusion started"
    "Information","main","11/27/12","02:13:09",,"ColdFusion started"
    "Information","main","11/27/12","02:14:18",,"ColdFusion started"
    "Information","main","11/27/12","02:15:44",,"ColdFusion started"
    "Information","main","11/27/12","02:17:06",,"ColdFusion started"
    "Information","main","11/27/12","02:34:19",,"ColdFusion started"
    "Information","main","11/27/12","03:01:20",,"ColdFusion started"
    "Information","main","11/27/12","05:25:59",,"ColdFusion started"
    "Information","main","11/27/12","06:30:48",,"ColdFusion started"
    "Information","main","11/27/12","06:36:20",,"ColdFusion started"
    "Information","main","11/27/12","09:34:07",,"ColdFusion started"
    "Information","main","11/27/12","09:35:39",,"ColdFusion started"
    "Information","main","11/27/12","09:36:41",,"ColdFusion started"
    "Information","main","11/27/12","09:39:15",,"ColdFusion started"
    "Information","main","11/27/12","09:40:42",,"ColdFusion started"
    "Information","main","11/27/12","09:42:55",,"ColdFusion started"
    "Information","main","11/27/12","09:44:23",,"ColdFusion started"
    "Information","main","11/27/12","09:46:18",,"ColdFusion started"
    "Information","main","11/27/12","09:47:35",,"ColdFusion started"
    "Information","main","11/27/12","09:48:53",,"ColdFusion started"
    "Information","main","11/27/12","09:50:04",,"ColdFusion started"
    "Information","main","11/27/12","09:51:51",,"ColdFusion started"
    "Information","main","11/27/12","09:53:05",,"ColdFusion started"
    "Information","main","11/27/12","09:54:24",,"ColdFusion started"
    "Information","main","11/27/12","09:55:28",,"ColdFusion started"
    "Information","main","11/27/12","09:56:38",,"ColdFusion started"
    "Information","main","11/27/12","09:58:03",,"ColdFusion started"
    "Information","main","11/27/12","09:59:03",,"ColdFusion started"
    "Information","main","11/27/12","10:04:37",,"ColdFusion started"
    "Information","main","11/27/12","12:04:02",,"ColdFusion started"

    I've been watching the live server metrics in FR on a second screen all day; the CPU, memory, and requests all seemed fine until about 12 midday, when the server rebooted.
    Looking at the logs for the hour between 9am and 10am (more than 15 restarts in that hour), the CPU never went over 44% usage and memory never exceeded 53% usage – in the recorded stats, at least. There is no JDBC tracking at the moment, so I'll add that and see whether MySQL is causing the problem. But can anyone help me narrow down what would cause ColdFusion to restart automatically? (I'm assuming the auto-restart itself only happens because FusionReactor is installed.) It's a Red Hat 5 LAMP stack running ColdFusion 9 and FusionReactor 4.5.2.
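    When a CF instance cycles under load like this with no CPU or heap spike recorded by FusionReactor, the JVM itself has often died (an OutOfMemory condition in a non-heap space, or an outright crash) before FR could log it. A hedged place to look next -- the paths below assume a default ColdFusion 9 Linux install and may differ on your box:

        grep -ci "outofmemory" /opt/coldfusion9/logs/cfserver.log    # JVM-level OOM messages
        ls /opt/coldfusion9/bin/hs_err_pid*.log 2>/dev/null          # HotSpot crash dumps, if any
        grep -i "java.args" /opt/coldfusion9/runtime/bin/jvm.config  # current heap/permgen settings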

    Read the article

  • SQL SERVER – How to Ignore Columnstore Index Usage in Query

    - by pinaldave
    Earlier I wrote about SQL SERVER – Fundamentals of Columnstore Index, and the very first question I received in email was the following: "We are using SQL Server 2012 CTP3 and so far so good. In our data warehouse solution we have created one non-clustered columnstore index on our large fact table. We have a very unique situation, but your article did not cover it. We are running a few queries on our fact table which work very efficiently, but there is one query which ran fine earlier; after creating the non-clustered columnstore index, it runs very slowly. We dropped the columnstore index and suddenly this one query runs fast, but the other queries which benefited from the columnstore index now run slowly. Any workaround for this situation?"

    In summary, the question in simple words is: "How can we ignore the columnstore index in selective queries?" Very interesting question. I can understand there may be cases when a columnstore index is not ideal and needs to be ignored. You can use the query hint IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX to ignore the columnstore index; the SQL Server engine will then use whichever other index is best. Here is a quick script to prove it. We will first create a sample table and then a columnstore index on it. Once the columnstore index is created, we will write a simple query that uses it, and then show the usage of the query hint.

    USE AdventureWorks
    GO
    -- Create New Table
    CREATE TABLE [dbo].[MySalesOrderDetail](
        [SalesOrderID] [int] NOT NULL,
        [SalesOrderDetailID] [int] NOT NULL,
        [CarrierTrackingNumber] [nvarchar](25) NULL,
        [OrderQty] [smallint] NOT NULL,
        [ProductID] [int] NOT NULL,
        [SpecialOfferID] [int] NOT NULL,
        [UnitPrice] [money] NOT NULL,
        [UnitPriceDiscount] [money] NOT NULL,
        [LineTotal] [numeric](38, 6) NOT NULL,
        [rowguid] [uniqueidentifier] NOT NULL,
        [ModifiedDate] [datetime] NOT NULL
    ) ON [PRIMARY]
    GO
    -- Create clustered index
    CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail] ([SalesOrderDetailID])
    GO
    -- Create sample data
    -- WARNING: This query may run for 2-10 minutes based on your system's resources
    INSERT INTO [dbo].[MySalesOrderDetail]
    SELECT S1.* FROM Sales.SalesOrderDetail S1
    GO 100
    -- Create ColumnStore Index
    CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore] ON [MySalesOrderDetail] (UnitPrice, OrderQty, ProductID)
    GO

    Now that we have created the columnstore index, the following query will certainly use it:

    SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
           SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
    FROM [dbo].[MySalesOrderDetail]
    GROUP BY ProductID
    ORDER BY ProductID
    GO

    We can specify the query hint IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX as in the following query, and it will not use the columnstore index:

    SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
           SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
    FROM [dbo].[MySalesOrderDetail]
    GROUP BY ProductID
    ORDER BY ProductID
    OPTION (IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX)
    GO

    Let us clean up the database:

    DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail]
    GO
    TRUNCATE TABLE dbo.MySalesOrderDetail
    GO
    DROP TABLE dbo.MySalesOrderDetail
    GO

    Again, make sure that you use the hint sparingly and understand its proper implications. Test with and without the hint, and select the best option after review with your administrator. Here is the question for you – have you started to use SQL Server 2012 for your validation and development (not in production)? It will be interesting to know the answer. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article
