Search Results

Search found 319 results on 13 pages for 'nicholas hill'.


  • Hill International Wins Oracle Eco-Enterprise Innovation Award

    - by Evelyn Neumayr
    In my last blog entry, I discussed Oracle’s Eco-Enterprise Innovation Award, part of the Oracle Excellence awards. Nominations for this year’s awards are due July 17. These awards are presented to organizations that use Oracle products to reduce their environmental footprint while improving their operational efficiency. One of last year’s winners was Hill International. Engineering News-Record magazine recently ranked Hill as the eighth-largest construction management firm in the United States. Hill International was able to streamline its forecasting and improve its visibility into its construction projects’ productivity and profitability using Oracle Primavera. It also implemented Oracle Hyperion Financial Management to standardize its financial reporting and forecasting processes and support its decision-making. With Oracle, Hill gained visibility into the true productivity of each project and cut its financial reporting cycle time from two weeks to one. The company also used the data generated to support new construction project proposals and determine the profitability of potential projects. Hill International realized significant cost savings and reduced its environmental impact on its US$400 million Comcast Center construction project in Philadelphia by centralizing its data storage, reducing paper usage, and maximizing project efficiency. It also leveraged the increased visibility offered by the Oracle solutions to make more environmentally sound business decisions regarding on-site demolition, re-use of previous structures, green design of new facilities, procurement, and materials usage. See more about Hill International and the other Eco-Enterprise Innovation award winners here.

    Read the article

  • Why is hill climbing called an anytime algorithm?

    - by crucified soul
    From Wikipedia: Anytime algorithm — In computer science an anytime algorithm is an algorithm that can return a valid solution to a problem even if it's interrupted at any time before it ends. The algorithm is expected to find better and better solutions the more time it keeps running. Hill climbing — Hill climbing can often produce a better result than other algorithms when the amount of time available to perform a search is limited, such as with real-time systems. It is an anytime algorithm: it can return a valid solution even if it's interrupted at any time before it ends. But the hill climbing algorithm can get stuck in a local optimum or on a ridge, and after that, even if it runs for infinite time, the result won't get any better. So why is hill climbing called an anytime algorithm?
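    For readers wondering how the two properties coexist, here is a minimal Python sketch (the objective and neighbor functions are illustrative assumptions, not from the question) of why hill climbing counts as anytime: it always holds a complete, valid candidate, so interrupting it early still returns an answer, even though a local optimum may cap how good that answer can get.
      import random

      def hill_climb(objective, neighbors, start, max_steps=10_000):
          # 'current' is always a complete, valid solution, so the search
          # can be interrupted at any step and still return something usable.
          current = start
          current_score = objective(current)
          for _ in range(max_steps):
              best_neighbor = max(neighbors(current), key=objective)
              if objective(best_neighbor) <= current_score:
                  # local optimum or ridge: more time will not improve the result
                  break
              current, current_score = best_neighbor, objective(best_neighbor)
          return current, current_score

      # toy run: maximize -(x - 3)^2 over the integers, neighbors are x - 1 and x + 1
      print(hill_climb(lambda x: -(x - 3) ** 2,
                       lambda x: [x - 1, x + 1],
                       start=random.randint(-10, 10)))
    In other words, "anytime" describes interruptibility (a valid best-so-far answer is always available), not a promise that the answer keeps improving forever.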

    Read the article

  • Awesome Back to the Future – Hill Valley Mod for Grand Theft Auto IV [Video]

    - by Asian Angel
    What could be better than playing a good round of Grand Theft Auto IV? Playing with a working DeLorean time machine with Marty McFly as the driver! Watch as this DeLorean tears up the roads in this video from YouTube user Seedyrom34. You can read more about the mod at the YouTube link provided below… Grand Theft Auto IV: Hill Valley – [Back to the Future Mod Showcase] [via Neatorama]

    Read the article

  • Artificial Intelligence library in Python

    - by João Portela
    I was wondering if there are any Python AI libraries similar to aima-python but for a more recent version of Python, and how they compare to aima-python. I am particularly interested in search algorithms such as hill climbing, simulated annealing, tabu search, and genetic algorithms. Edit: made the question clearer.
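    If no maintained library turns out to fit, these particular algorithms are short enough to write directly. Below is a hedged sketch of simulated annealing in plain Python (the objective, neighbor function, and cooling schedule are placeholder assumptions, not taken from aima-python or any other library):
      import math
      import random

      def simulated_annealing(objective, neighbor, start, t0=1.0, cooling=0.995, steps=10_000):
          # Minimize 'objective', occasionally accepting worse moves with a
          # temperature-dependent probability to escape local optima.
          current = best = start
          t = t0
          for _ in range(steps):
              candidate = neighbor(current)
              delta = objective(candidate) - objective(current)
              if delta < 0 or random.random() < math.exp(-delta / t):
                  current = candidate
                  if objective(current) < objective(best):
                      best = current
              t *= cooling  # geometric cooling schedule
          return best

      # toy run: minimize (x - 2)^2 with a random-walk neighbor
      print(simulated_annealing(lambda x: (x - 2) ** 2,
                                lambda x: x + random.uniform(-1, 1),
                                start=10.0))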

    Read the article

  • Travelling Salesman Problem

    - by Arjun Vasudevan
    I'm trying to solve the travelling salesman problem using the following algorithms: DFS, hill climbing, and A*. I was able to write code for solving it using DFS. Can I have some help solving it with the other two algorithms? I have searched for it a lot on the web.
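    As a concrete starting point for the hill-climbing variant, here is a hedged Python sketch (the random city coordinates and the 2-opt neighborhood are illustrative assumptions, not from the question): it keeps reversing tour segments as long as doing so shortens the tour, and stops at a local optimum.
      import itertools
      import math
      import random

      def tour_length(tour, cities):
          # total length of the closed tour
          return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
                     for i in range(len(tour)))

      def hill_climb_tsp(cities):
          # 2-opt hill climbing: accept any segment reversal that shortens the tour
          tour = list(range(len(cities)))
          random.shuffle(tour)
          improved = True
          while improved:
              improved = False
              for i, j in itertools.combinations(range(len(tour)), 2):
                  candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                  if tour_length(candidate, cities) < tour_length(tour, cities):
                      tour, improved = candidate, True
          return tour

      cities = [(random.random(), random.random()) for _ in range(20)]
      best = hill_climb_tsp(cities)
      print(round(tour_length(best, cities), 3))
    A* additionally needs an admissible heuristic on partial tours (a minimum spanning tree over the unvisited cities is a common choice), and restarting the hill climber from several random tours usually improves the result.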

    Read the article

  • How do I implement collision detection with a sprite walking up a rocky-terrain hill?

    - by detectivecalcite
    I'm working in SDL and have bounding rectangles for collisions set up for each frame of the sprite's animation. However, I recently stumbled upon the issue of putting together collisions for characters walking up and down hills/slopes with irregularly curved or rocky terrain - what's a good way to do collisions for that type of situation? Per-pixel? Loading up the points of the incline and doing player-line collision checking? Should I use bounding rectangles in general or circle collision detection?
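    One lightweight approach often used for irregular walkable ground is to treat the terrain as a height map (one ground height sampled per x column from the level art) and snap the sprite's feet to it, keeping rectangles or circles only for walls, ceilings, and objects. A hedged Python-style sketch of the idea, with the height-map data and sprite fields assumed purely for illustration:
      def ground_height_at(heightmap, x):
          # linearly interpolate the ground line between sampled columns
          x0 = int(x)
          x1 = min(x0 + 1, len(heightmap) - 1)
          t = x - x0
          return heightmap[x0] * (1 - t) + heightmap[x1] * t

      def resolve_terrain(sprite_x, sprite_bottom, velocity_y, heightmap):
          # y grows downward (screen coordinates, as in SDL);
          # if the feet sink below the ground line, snap them onto it
          ground_y = ground_height_at(heightmap, sprite_x)
          if sprite_bottom >= ground_y:
              return ground_y, 0.0
          return sprite_bottom, velocity_y

      # toy height map: a gentle slope, one ground y per pixel column
      heightmap = [300 - 0.5 * x for x in range(200)]
      print(resolve_terrain(sprite_x=40.5, sprite_bottom=295.0, velocity_y=2.0, heightmap=heightmap))
    Per-pixel or line-segment checks can then be reserved for the few places a height map cannot represent, such as overhangs or caves.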

    Read the article

  • Sign of a Good Game

    - by Matt Christian
    (Warning: This post contains spoilers about SILENT HILL 2.  If you haven't played this game, you are dumb) What is one sign of a great game? One of the signs I realized recently is when a game continues to stun and surprise you years and years after you've played and beaten it.  As a major Silent Hill fan, I recently was reminded of Silent Hill 2, and even though I see it as one of my favorite Silent Hill games, there are still things I'm learning about it that are neat little additions that add to the atmosphere (atmosphere also makes a great game!). For instance, when you start the game you are given a letter by your wife who has been deceased for years and years.  You are directed to Silent Hill and start trekking through hell all by your lonesome (with the exception of a few psychos).  As you continue through the game, pieces of the letter begin to fade and disappear until eventually it is completely non-existent, thus implying the letter was never real and was a delusion you created. Another example is the game's use of imagery the player knows about but might not notice at first.  For me, the most apparent of these was the dress you find near the start when you find the flashlight, which is the same dress you see Mary (your wife) wearing in the flashback sequences.  However, one thing I didn't know was that several deceased bodies you encounter lying around Silent Hill are actually the body of the main character (James), which invokes the idea that you've seen that body before but can't pinpoint where... It's amazing to see a game go to such unique lengths to provide a psychological horror game.  Sure, all the dead bodies could be randomly modelled and the dress could be any ol' dress, but just the idea of your brain knowing something deep down that you can't pinpoint is a really unique idea.  In my opinion, it ties less into the subconscious and more into natural tendencies; it taps into the fear hidden inside us all.

    Read the article

  • XRDP errors when trying to use sesman-x11rdp

    - by Nicholas
    I've just installed Ubuntu 11.10 Desktop on an old laptop of mine and I wanted to set it up so I could remote into it from my Windows desktop. I've installed XRDP, but when I attempt to log in using sesman-x11rdp it logs in, then the window shuts down. I've checked over the logs and here is what I get at the time of login:
      [20120123-16:49:23] [INFO ] scp thread on sck 8 started successfully
      [20120123-16:49:23] [INFO ] granted TS access to user nicholas
      [20120123-16:49:24] [INFO ] starting X11rdp session...
      [20120123-16:49:24] [CORE ] error starting X server - user nicholas - pid 3869
      [20120123-16:49:24] [DEBUG] errno: 2, description: No such file or directory
      [20120123-16:49:24] [DEBUG] execve parameter list: 11
      [20120123-16:49:24] [DEBUG] argv[0] = X11rdp
      [20120123-16:49:24] [DEBUG] argv[1] = :11
      [20120123-16:49:24] [DEBUG] argv[2] = -geometry
      [20120123-16:49:24] [DEBUG] argv[3] = 1280x720
      [20120123-16:49:24] [DEBUG] argv[4] = -depth
      [20120123-16:49:24] [DEBUG] argv[5] = 16
      [20120123-16:49:24] [DEBUG] argv[6] = -bs
      [20120123-16:49:24] [DEBUG] argv[7] = -ac
      [20120123-16:49:24] [DEBUG] argv[8] = -nolisten
      [20120123-16:49:24] [DEBUG] argv[9] = tcp
      [20120123-16:49:25] [DEBUG] argv[10] = (null)
      [20120123-16:49:34] [ERROR] X server for display 11 startup timeout
      [20120123-16:49:34] [ERROR] X server for display 11 startup timeout
      [20120123-16:49:34] [INFO ] starting xrdp-sessvc - xpid=3869 - wmpid=3868
      [20120123-16:49:34] [ERROR] another Xserver is already active on display 11
      [20120123-16:49:34] [DEBUG] aborting connection...
      [20120123-16:49:34] [INFO ] session 3867 - user nicholas - terminated
    Can anyone point me to the proper way to get this working with x11rdp?

    Read the article

  • cannot access ubuntu 12.04 SAMBA share from windows 7 using hostname

    - by user98398
    I've been trying for days to get this working, and everywhere I look online it seems no one has a definitive answer, so here is the rundown: I have an external drive attached to my Ubuntu 12.04 machine, "nicholas-desktop." I have the entire drive shared over the network via SAMBA. If I try to access the drive from Windows 7 by using "\\nicholas-desktop" it fails, saying it cannot locate "nicholas-desktop." However, if I use the current IP address assigned to my machine by my router's DHCP server by typing "\\192.168.2.XXX" I have no problems accessing the share. If I try to ping my Ubuntu machine's hostname from Windows it fails. The same happens if I try to ping my Windows machine, "nicholas-laptop", from my Ubuntu machine. Again, if I use either machine's assigned IP address it works fine. Can someone please help me get this working? I don't want any workarounds like setting a static IP or a DHCP reservation; I want to be able to resolve hostnames from both sides. I have tried enabling SAMBA's WINS server so I could resolve the hostnames using NetBIOS, but that didn't work either; I may have made a mistake setting it up though. Thanks for your time, NCB

    Read the article

  • Memory-efficient import of many data files into a pandas DataFrame in Python

    - by richardh
    I import a directory of |-delimited .dat files into a pandas DataFrame. The following code works, but I eventually run out of RAM with a MemoryError:
      import pandas as pd
      import glob
      temp = []
      dataDir = 'C:/users/richard/research/data/edgar/masterfiles'
      for dataFile in glob.glob(dataDir + '/master_*.dat'):
          print dataFile
          temp.append(pd.read_table(dataFile, delimiter='|', header=0))
      masterAll = pd.concat(temp)
    Is there a more memory-efficient approach? Or should I go whole hog to a database? (I will move to a database eventually, but I am baby-stepping my move to pandas.) Thanks! FWIW, here is the head of an example .dat file:
      cik|cname|ftype|date|fileloc
      1000032|BINCH JAMES G|4|2011-03-08|edgar/data/1000032/0001181431-11-016512.txt
      1000045|NICHOLAS FINANCIAL INC|10-Q|2011-02-11|edgar/data/1000045/0001193125-11-031933.txt
      1000045|NICHOLAS FINANCIAL INC|8-K|2011-01-11|edgar/data/1000045/0001193125-11-005531.txt
      1000045|NICHOLAS FINANCIAL INC|8-K|2011-01-27|edgar/data/1000045/0001193125-11-015631.txt
      1000045|NICHOLAS FINANCIAL INC|SC 13G/A|2011-02-14|edgar/data/1000045/0000929638-11-00151.txt
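    One incremental step before moving to a database is to shrink what each read keeps in memory: load only the columns you need and give them explicit dtypes. A hedged sketch along those lines (the column names come from the sample header above; the dtype choices are assumptions, and the 'category' dtype needs a reasonably recent pandas):
      import glob
      import pandas as pd

      dataDir = 'C:/users/richard/research/data/edgar/masterfiles'
      usecols = ['cik', 'cname', 'ftype', 'date', 'fileloc']
      dtypes = {'cik': 'int64', 'cname': 'category', 'ftype': 'category', 'fileloc': 'object'}

      frames = []
      for dataFile in glob.glob(dataDir + '/master_*.dat'):
          frames.append(pd.read_table(dataFile, delimiter='|', header=0,
                                      usecols=usecols, dtype=dtypes,
                                      parse_dates=['date']))
      masterAll = pd.concat(frames, ignore_index=True)
    If the concatenated frame still does not fit in RAM, appending each file to an on-disk store (for example pandas' HDFStore) or going straight to the database avoids ever holding every file in memory at once.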

    Read the article

  • Your Day-by-Day Guide to Agile PLM at Oracle OpenWorld 2012

    - by Kerrie Foy
    This year’s Oracle OpenWorld conference is nearly here, and we’re all excited about what we have planned! With five days of activities and customer presenters from market leaders and top innovators like The Coca-Cola Company, Starbucks, JDSU, Facebook, GlobalFoundries, and more, this is an event you don't want to miss. I've compiled this day-by-day guide to help anyone keep track of all the “Product Lifecycle Management and Product Value Chain” sessions and activities at OpenWorld 2012, September 30 – October 4 in San Francisco, California.  Monday, October 1 There are great networking activities on Sunday, September 30, but PLM-specific sessions start after general conference keynotes on Monday, October 1 at 10:45 a.m. at the InterContinental Hotel in room Telegraph Hill. In fact, most of our sessions this year will be held in this room, which is still close to the conference keynotes in Moscone, but just far enough away to allow some focused networking and discussions. This first session, 10:45 – 11:45 a.m., is a joint session with the Agile and AutoVue teams, entitled “Streamline PLM Design-to-Manufacturing Processes with AutoVue Visualization Solutions” featuring presenters from Oracle as well as joint AutoVue and Agile PLM customer GlobalFoundries. In the following 12:15 – 1:15 p.m. slot, there are two sessions to choose from, so if you have a team of representatives attending OpenWorld, you may consider splitting up to catch both of these: a) Our General Session will be held in the InterContinental Hotel Ballroom C, which will cover our complete enterprise PLM strategy, product updates, and roadmaps. It’s our pleasure to feature a customer keynote presentation from Chris Bedi, CIO, and Rajeev Sethi, Director IT Business Engagement, of JDSU. b) A focused session on integrating PLM with Engineering and Supply Chain Systems will be held on the second floor of Moscone West (next to the InterContinental) in room 2022. Join to discover how these types of integrations help companies manage common and integrated design information across all MCAD, ECAD, and software components. After a lunch break and perhaps a visit to the Demogrounds in Moscone West, select from two product roadmap sessions in the next time slot (3:15 – 4:15 p.m.): an Agile 9.3.x session located in the InterContinental’s Ballroom C, and an Agile PLM for Process session located back in the InterContinental’s Telegraph Room. Both sessions will have strong content around each product line’s latest releases, vision, and customer examples. We are very pleased to feature Daniel Soosai of Facebook in the A9 session and Vinnie D’Agostino of The Coca-Cola Company in the PLM for Process session. Afterwards, hang in there for one last session of the day from 4:45 – 5:45 p.m.; it’s an insightful discussion on leveraging Agile PLM as the Foundation for Enterprise Quality Management, and it’s sure to be one of the best. In the Telegraph Room, this session will feature Oracle experts, partner co-presenter David Bartlett from CPG Solutions, and customer co-presenter Thomas Crowe, CIO of PL Developments. Hear their experience around implementing collaborative, integrated solutions to ensure effective knowledge transfer throughout an organization, and how to perform analysis in real time to resolve product quality issues swiftly and efficiently.
On Monday evening there will be plenty of industry, product, and partner dinners, so take advantage of all the networking opportunities and catch some great tunes at the 5 day Oracle OpenWorld Music Festival! Tuesday, October 2 Tuesday starts early with a special PLM Networking Brunch, sponsored by several partners, from 8:30 a.m. – 10:30 a.m. at the B Restaurant that sits atop Yerba Buena Gardens. You’ll have the unique opportunity to meet with like-minded industry peers and a PLM partner to discuss a topic of your choosing while enjoying a delicious meal. Registration is required, so to inquire about attending this brunch, please email Terri.Hiskey-AT-oracle.com. After wrapping up your conversations over brunch, head over to the Marriott Marquis in the Nob Hill CD room for a chance to experience the Oracle Product Lifecycle Analytics solution in a Hands-On Lab, open from 10:15 a.m. – 12:45 p.m. Experts will be there to answer your questions. Back in the InterContinental Hotel’s Telegraph room, the session on “Ideation and Requirements Management: Capturing the Voice of the Customer” begins at 11:45 a.m. – 12:45 p.m. This may be the session for you if you’re struggling with challenges like too many repositories of customer needs, requests, and ideas; limited visibility into which ideas are being advanced by customers and field resources; or if you’re unable to leverage internal expertise to expose effort and potential risks. This session will discuss how Agile PLM can help you overcome ideation challenges to deliver the right products to their targeted markets and fulfill customer desires. Next, from 1:15 – 2:15 p.m. join us for a session on Managing Profitable Innovation with Oracle Product Lifecycle Analytics. If you missed the Hands-on Lab, have more questions, or simply want to be inspired by the product’s forward-thinking vision and capabilities, this is a great opportunity to meet the progressive-minded executives behind the application. After this session, it may be a good opportunity to swing by the Demogrounds in Moscone West and visit the Agile PLM demos at exhibit booths #81 for Agile PLM for Discrete Manufacturing, #70 for Agile PLM for Process, and #82 for AutoVue and Agile PLM Enterprise Visualization. Check out the related Supply Chain Management booths close by if you’re interested - here's the map. There’s always lots to see and do around the exhibit area. But don’t forget the last session of the day from 5:00 p.m. – 6:00 p.m. in Telegraph Hill on Managing Product Innovation and Compliance in Life Science Companies, a “must-see” if you’re in this industry. Launching innovative products quickly is already a high-stakes challenge, but companies in the life sciences industry face uniquely severe consequences when new products don’t perform or comply as required. In recent years, more and more regulations have become mandatory, and new ones, such as REACH, are currently going into effect for several companies. Customer presenters from pharmaceutical leader Eli Lilly will share how they’ve leveraged Agile PLM to deliver high-quality, innovative products in a fast-paced, heavily regulated market environment. Tuesday evening unwind at the Supply Chain Management Reception from 6:00 – 8:00 p.m. at the premier boutique Roe Nightclub and Lounge, which is located about three blocks down on Howard Street (on the other side of Moscone from the InterContinental Hotel). Registration is required. Click here for the details.   
Wednesday, October 3 We have another full line-up on Wednesday, so be ready for an action-packed day. We start with a session at 10:15 – 11:15 a.m. in the Telegraph Room where we have a session on “PLM for Consumer Products: Building an Engine for Quality and Innovation” with featured presenters from Starbucks and partner Kalypso. This is a rare opportunity to learn directly from Starbucks how they instill quality and innovation throughout their organization, products, and processes, leveraging PLM disciplines with strong support from their partner.  If you’re not in the consumer products industry, we recommend attending another session at 10:15 – 11:15 a.m. in Moscone West room 3005: “Eco-Enterprise Innovation Awards and the Business Case for Sustainability” featuring Jeff Henley, Oracle’s Chairman of the Board and Jon Chorley, Chief Sustainability Officer. Oracle will honor select customers with Oracle’s Eco-Enterprise Innovation award, which recognizes customers and their respective partners who rely on Oracle products to support their green business practices to reduce their environmental impact while improving business efficiencies and reducing costs. The awards presentation is followed by a panel discussion with customers and Oracle executives, who describe how these award-winning organizations are embracing environmental initiatives as a central part of their business strategy and how information technology plays a pivotal role. Next at 11:45 a.m. – 12:45 p.m. in Telegraph Hill attend our session devoted to exploring Product Lifecycle Management’s role in Software Lifecycle Management. This is a thought leadership session with Oracle experts in the field on the importance of change management, and we’ll discuss how Oracle has for years leveraged Agile PLM to develop Agile PLM. If software lifecycle management doesn’t apply to your business or you’d rather engage in some lively one-on-one discussions, we also have a “Supply Chain Meet the Experts” session in Moscone West Room 2001A. Product experts, thought leaders and executives will be on hand to discuss your questions/topics, so come prepared. This session tends to fill up fast so try to get in early. At 1:15 – 2:15 p.m. join us back in Telegraph Hill for a session focused on leveraging the Agile Product Portfolio Management application as the Product Development Master Schedule to improve efficiencies, optimize resources, and gain visibility across projects enterprise-wide to improve portfolio profitability. Customer presenters from Broadcom will explain how they’ve leveraged the product to enable a master schedule with enterprise-level, phase-gate program and project collaboration and resource optimization. Again in Telegraph Hill from 3:30 – 4:30 p.m. we have an interesting session with leading semiconductor customer LSI and partner Kalypso on how LSI leveraged Agile PLM to advance from homegrown applications to complete Product Value Chain Management. That type of transition can be challenging, and LSI details how they were able to achieve their goals and the value they gained along the journey – a fascinating account for any company interested in leveraging best practices to innovate their business processes and even end products. Lastly, we’ll wrap up in Telegraph Hill from 5:00 – 6:00 p.m. 
with a session on “Ensuring New Product Success by Achieving Excellence in New Product Introduction.” This is a cross-industry session, guaranteed to deliver insight into the often elusive practice of creating winning products, and we’re very excited about it. According to IDC Manufacturing Insights analyst Joe Barkai, “Product Failures are not necessarily a result of bad ideas…they are a result of suboptimal decisions.” We’ll show you how to wire your business processes to enhance decision-making and maximize product potential. Now, quickly hit your hotel room to freshen up and then catch one of the many complimentary shuttles to the much-anticipated Oracle Customer Appreciation Event on Treasure Island. We have a very exciting show planned – check out what’s in store here. Thursday, October 4 PLM has a light schedule on Thursday this year with just one session, but this again is one of our best sessions on managing the Product Value Chain: at 11:15 a.m. – 12:15 p.m. in Telegraph Hill, it’s a customer- and partner-driven session with Sonoco Products and Deloitte telling their story about how to achieve integrated change control by interfacing Agile PLM with Oracle E-Business Suite. Sonoco Products, a global manufacturer of consumer and industrial packaging materials, with its systems integrator, Deloitte, is doing this by implementing prebuilt integration (Oracle Design-to-Release Integration Pack for Agile Product Lifecycle Management for Process and Oracle Process) to integrate Agile with Oracle Product Hub/Oracle Product Information Management and Oracle E-Business Suite. This session presents a case study of how Sonoco is leveraging this solution to improve data quality and build a framework for stronger master data governance. Even though that ends our PLM line-up at OpenWorld, there will still be many sessions and activities at the conference, so visit the Oracle OpenWorld website to review agendas and build your schedule. And of course, download and bring this guide and the latest version of the Agile PLM Focus-On Document (available soon!). San Francisco is a wonderful city to explore, and we’re glad you’re considering joining the Agile PLM team at Oracle OpenWorld! I hope to see you there! Follow me before the conference and on site for real-time updates about #OOW12 on Twitter @Kerrie_Foy or @AgilePLM.

    Read the article

  • Where can I find WebSphere configuration files?

    - by Nicholas Key
    Hi there, I would like to know where the WebSphere configuration details are saved. Specifically, the configuration details that are shown in the Administrative Console (from the web) or from the console using wsadmin. Some examples would be: Java and Process Management: Class loader, Process definition, Process execution; Container Settings: Session management, SIP Container Settings, Web Container Settings, Portlet Container Settings. Are there XML files that persist these configuration details? Nicholas

    Read the article

  • how to escape white space in bash loop list

    - by MCS
    I have a bash shell script that loops through all child directories (but not files) of a certain directory. The problem is that some of the directory names contain spaces. Here are the contents of my test directory:
      $ ls -F test
      Baltimore/ Cherry Hill/ Edison/ New York City/ Philadelphia/ cities.txt
    And the code that loops through the directories:
      for f in `find test/* -type d`; do
        echo $f
      done
    Here's the output:
      test/Baltimore
      test/Cherry
      Hill
      test/Edison
      test/New
      York
      City
      test/Philadelphia
    Cherry Hill and New York City are treated as 2 or 3 separate entries. I tried quoting the filenames, like so:
      for f in `find test/* -type d | sed -e 's/^/\"/' | sed -e 's/$/\"/'`; do
        echo $f
      done
    but to no avail. There's got to be a simple way to do this. Any ideas? The answers below are great. But to make this more complicated - I don't always want to use the directories listed in my test directory. Sometimes I want to pass in the directory names as command-line parameters instead. I took Charles' suggestion of setting the IFS and came up with the following:
      dirlist="${@}"
      (
        [[ -z "$dirlist" ]] && dirlist=`find test -mindepth 1 -type d` && IFS=$'\n'
        for d in $dirlist; do
          echo $d
        done
      )
    and this works just fine unless there are spaces in the command line arguments (even if those arguments are quoted). For example, calling the script like this:
      test.sh "Cherry Hill" "New York City"
    produces the following output:
      Cherry
      Hill
      New
      York
      City
    Again, I know there must be a way to do this - I just don't know what it is...

    Read the article

  • Unable to jstat WebSphere Application Server PID

    - by Nicholas Key
    Hi stackoverflow'ers, I've spent the entire day trying to find relevant resources about running jstat on the WebSphere process ID. I have WebSphere Application Server 7.0 installed on Windows 2003. I ran this command:
      jstat -gcutil [PID] 1000
    But I kept getting a "[PID] not found" message. Any idea how to resolve this issue? Or does Java's jstat utility not probe into IBM's derivative JVM? Nicholas

    Read the article

  • JVM ID not found

    - by Nicholas Key
    Hi all, I've recently downloaded and installed WebSphere Application Server 7.0 on Windows 2003. I wanted to run jstat (JDK 1.6) to probe the JVM, but I kept getting a " not found" message. Any idea why this is happening? Nicholas

    Read the article

  • geocode webservice address parameter written in another language

    - by nicholas
    Dear fellow programmers, I am trying to use the following Google Maps web service in order to locate Greek addresses: http://maps.google.com/maps/api/geocode/json?address=Ακαδημίας 16&sensor=false and it does not work. If I hit the exact same address but written with Latin alphabet characters: maps.google.com/maps/api/geocode/json?address=akadimias 16&sensor=false, it works and returns the right result. Could somebody help with this? (To use this service with Greek letters as the parameter value.) Thank you in advance, Nicholas
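    A frequent cause of this symptom is that the Greek characters are not being percent-encoded as UTF-8 before they go into the query string. A hedged Python sketch of calling the same endpoint with the address properly encoded (the modern Geocoding API also expects an API key, omitted here to match the question's URL):
      import json
      import urllib.parse
      import urllib.request

      address = 'Ακαδημίας 16'  # the Greek street address from the question
      params = urllib.parse.urlencode({'address': address, 'sensor': 'false'})  # UTF-8 percent-encoding
      url = 'http://maps.google.com/maps/api/geocode/json?' + params

      with urllib.request.urlopen(url) as response:
          result = json.loads(response.read().decode('utf-8'))
      print(result.get('status'))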

    Read the article

  • How to post something to Facebook from Android?

    - by Nicholas Key
    Hi stackoverflow'ers, Quick questions here: What are the necessary tools/APIs to post something from Android to Facebook? I looked at this URL http://wiki.developers.facebook.com/index.php/User%3AAndroid and it mentioned a few suggestions. Has anyone tried fbrocket? What is Facebook Connect anyway? Thanks! Nicholas

    Read the article

  • no sms deleted folder?

    - by Nicholas
    Hi, I'm new to Android and trying to convert a Windows Mobile app to Java/Android. In WM there are the following standard folders: Inbox, Sent, Drafts, Outbox, Deleted. I'm able to access all folders except Deleted with Uri.parse("content://sms/XXXXXX"). 1) Is the Deleted folder missing? 2) Is it possible to create user folders, like "content://sms/My Folder"? Thanks, Nicholas

    Read the article

  • Connecting PC via Bluetooth SPP

    - by Nicholas
    Hi, I have a Widcomm example, BluChat ("WIDCOMM SDK RFComm Service"), running on my PC with a USB dongle (BT-2400P). I would like to connect to this chat from my HTC Desire. So I started with the Java example http://developer.android.com/resources/samples/BluetoothChat/index.html. This is also using BT RFComm. If I'm using my HTC as the server it works fine, but I would like to use my PC as the server. Then .connect() comes back with "Service Discovery Failure". I have modified the UUID string in the Java example to match the PC application:
      mmSocket = mmDevice.createRfcommSocketToServiceRecord(UUID.fromString("5fc2a42e-144e-4bb5-b43f-4e61711d1c32"));
      mmSocket.connect();
    What is missing? Any help appreciated. Nicholas

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 2 (sys.dm_exec_sessions)

    - by Tamarick Hill
    This sys.dm_exec_sessions DMV is another Server-Scoped DMV which returns information for each authenticated session that is running on your SQL Server box. Let’s take a look at some of the information that this DMV returns.
      SELECT * FROM sys.dm_exec_sessions
    This DMV is very similar to the DMV we reviewed yesterday, sys.dm_exec_requests, and returns some of the same information, such as reads, writes, and status for a given session_id (SPID). But this DMV returns additional information, such as the host name of the machine that owns the SPID, the program that is being used to connect to SQL Server, and the client interface name. In addition to this information, this DMV also provides useful information on session-level settings that may be on or off, such as quoted identifier, arithabort, ansi padding, ansi nulls, etc. This DMV will also provide information about what specific isolation level the session is executing under and whether the deadlock priority for your SPID has been changed from the default. Lastly, this DMV provides you with an Original Login Name, which comes in handy whenever you have some type of context switching taking place due to an ‘EXECUTE AS’ statement being used and you need to identify the original login that started a session. For more information on this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms176013.aspx

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 22 (sys.dm_db_index_physical_stats)

    - by Tamarick Hill
    The sys.dm_db_index_physical_stats Dynamic Management Function is used to return information about the fragmentation levels, page counts, depth, number of levels, record counts, etc. of the indexes on your database instance. One row is returned for each level in a given index, which we will discuss more later. The function takes a total of 5 input parameters, which are (1) database_id, (2) object_id, (3) index_id, (4) partition_number, and (5) the mode of the scan level that you would like to run. Let’s use this function with our AdventureWorks2012 database to better illustrate the information it provides.
      SELECT * FROM sys.dm_db_index_physical_stats(db_id('AdventureWorks2012'), NULL, NULL, NULL, NULL)
    As you can see from the result set, there is a lot of beneficial information returned from this DMF. The first couple of columns in the result set (database_id, object_id, index_id, partition_number, index_type_desc, alloc_unit_type_desc) are either self-explanatory or have been explained in our previous blog sessions, so I will not go into detail about these at this time. The next column in the result set is the index_depth, which represents how deep the index goes. For example, if we have a large index that contains 1 root page, 3 intermediate levels, and 1 leaf level, our index depth would be 5. The next column is the index_level, which refers to what level (of the depth) a particular row is referring to. Next is probably one of the most beneficial columns in this result set, which is the avg_fragmentation_in_percent. This column shows you how fragmented a particular level of an index may be. Many people use this column within their index maintenance jobs to dynamically determine whether they should do REORG’s or full REBUILD’s of a given index. The fragment_count represents the number of fragments in a leaf level, while the avg_fragment_size_in_pages represents the number of pages in a fragment. The page_count column tells you how many pages are in a particular index level. From my result set above, you see that the remaining columns all have NULL values. This is because I did not specify a ‘mode’ in my query and as a result it used the ‘LIMITED’ mode by default. The LIMITED mode is meant to be lightweight, so it does not collect information for every column in the result set. I will re-run my query using the ‘DETAILED’ mode and you will see we now have results for these columns.
      SELECT * FROM sys.dm_db_index_physical_stats(db_id('AdventureWorks2012'), NULL, NULL, NULL, 'DETAILED')
    From the remaining columns, you see we get even more detailed information, such as how many records are in a particular index level (record_count). We have a column for ghost_record_count, which represents the number of records that have been marked for deletion but have not physically been removed by the background ghost cleanup process. We later see information on the MIN, MAX, and AVG record size in bytes. The forwarded_record_count column refers to records that have been updated and now no longer fit within the row on the page and thus have to be moved. A forwarded record is left in the original location with a pointer to the new location. The last column in the result set is the compressed_page_count column, which tells you how many pages in your index have been compressed. This is a very powerful DMF that returns good information about the current indexes in your system. However, based on the mode you select, it could be a very resource-intensive function, so be careful with how you use it.
For more information on this Dynamic Management Function, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms188917.aspx Follow me on Twitter @PrimeTimeDBA

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 23 (sys.dm_db_index_usage_stats)

    - by Tamarick Hill
    The sys.dm_db_index_usage_stats Dynamic Management View is used to return usage information about the various indexes on your SQL Server instance. Let’s have a look at this DMV against our AdventureWorks2012 database so we can examine the information returned.
      SELECT * FROM sys.dm_db_index_usage_stats WHERE database_id = db_id('AdventureWorks2012')
    The first three columns in the result set represent the database_id, object_id, and index_id of a given row. You can join these columns back to other system tables to extract the actual database, object, and index names. The next four columns are probably the most beneficial columns within this DMV. First, the user_seeks column represents the number of times that a user query caused a seek operation against a particular index. The user_scans column represents how many times a user query caused a scan operation on a particular index. The user_lookups column represents how many times an index was used to perform a lookup operation. The user_updates column refers to how many times an index had to be updated due to a write operation that affected a particular index. The last_user_seek, last_user_scan, last_user_lookup, and last_user_update columns provide you with DATETIME information about when the last user scan, seek, lookup, or update operation was performed. The remaining columns in the result set are the same as the ones we previously discussed, except instead of the various operations being generated from user requests, they are generated from system background requests. This is an extremely useful DMV and one of my favorites when it comes to index maintenance. As we all know, indexes are extremely beneficial for improving the performance of your read operations. But indexes do have a downside as well. Indexes slow down the performance of your write operations, and they also require additional resources for storage. For this reason, in my opinion, it is important to regularly analyze the indexes on your system to make sure the indexes you have are being used efficiently. My AdventureWorks2012 database is only used for demonstrating or testing things, so I don't have a lot of meaningful information here, but for a production system, if you see an index that is never getting any seeks, scans, or lookups, but is constantly getting a ton of updates, it more than likely would be a good candidate for you to consider removing. You would not be getting much benefit from the index, yet it is incurring a cost on your system due to it constantly having to be updated for your write operations, not to mention the additional storage it is consuming. You should regularly analyze your indexes to ensure you keep your database systems as efficient and lean as possible. One thing to note is that these DMV statistics are reset every time SQL Server is restarted. Therefore it would not be a wise idea to make decisions about removing indexes after a server reboot or a cluster roll. If you restart your SQL Server instances frequently, for example if you schedule weekly/monthly cluster rolls, then you may not capture indexes that are being used for weekly/monthly reports that run for business users. And if you remove them, you may have some upset people at your desk on Monday morning.
If you would like to begin analyzing your indexes to possibly remove the ones that your system is not using, I would recommend building a process to load this DMV information into a table on a scheduled basis (depending on how frequently you perform an operation that would reset these statistics); then you can analyze the data over a period of time to get a more accurate view of which indexes are really being used and which ones are not. For more information about this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms188755.aspx Follow me on Twitter @PrimeTimeDBA

    Read the article
