Search Results

Search found 6107 results on 245 pages for 'reserved words'.

Page 57 of 245

  • General monitoring for SQL Server Analysis Services using Performance Monitor

    - by Testas
    A recent customer engagement required the setup of a monitoring solution for SSAS. Due to the time restrictions placed upon this, the native Windows Performance Monitor (Perfmon) and SQL Server Profiler monitoring tools were used, as a third-party tool would have meant the customer providing an additional monitoring server that was not available. I wanted to outline the performance monitoring counters that were used to monitor the system on which SSAS was running. Due to the slow query performance that was occurring during certain scenarios, Perfmon was used to establish whether any pressure was being placed on the disk, CPU or memory subsystems when concurrent connections access the same query, and Profiler to pinpoint how the query was being managed within SSAS; Profiler I will leave for another blog. This guide is not designed to provide a definitive list of what should be used when monitoring SSAS; different situations may require the addition or removal of counters as presented by the situation. However, I hope that it serves as a good basis for starting your monitoring of SSAS. I would also like to acknowledge Chris Webb’s awesome chapters from “Expert Cube Development” that also helped shape my monitoring strategy: http://cwebbbi.spaces.live.com/blog/cns!7B84B0F2C239489A!6657.entry

    Simulating Connections
    To simulate the additional connections to the SSAS server whilst monitoring, I used ascmd to run multiple concurrent copies of the typical and worst-performing queries that were identified by the customer. A similar script can be downloaded from CodePlex at http://www.codeplex.com/SQLSrvAnalysisSrvcs (file name: ASCMD_StressTestingScripts.zip).

    Performance Monitor
    Within Performance Monitor, a counter log was created that contained the list of counters below. The important point to note when running the counter log is that the RUN AS property within the counter log properties should be changed to an account that has rights to the SSAS instance when monitoring MSAS counters. If you fail to do so, the counter log runs under the system account, no errors or warnings are given while the counter log is running, and it is only when you come to view the MSAS counters that you find they were not collected. If your connection simulation takes hours, this could prove quite frustrating if not done beforehand :) A scripted sketch of both the simulation and the counter log follows at the end of this list.

    The counters used (Object \ Counter (Instance) – justification):

    - System \ Processor Queue Length (N/A) – Indicates how many threads are waiting for execution against the processor. If this counter is consistently higher than around 5 when processor utilization approaches 100%, that is a good indication that there is more work (active threads ready for execution) than the machine's processors are able to handle.
    - System \ Context Switches/sec (N/A) – Measures how frequently the processor has to switch from user mode to kernel mode to handle a request from a thread running in user mode. The heavier the workload running on your machine, the higher this counter will generally be, but over the long term its value should remain fairly constant. If this counter suddenly starts increasing, however, it may be an indication of a malfunctioning device, especially if the Processor \ Interrupts/sec (_Total) counter on your machine shows a similar unexplained increase.
    - Process \ % Processor Time (sqlservr) – Definitely should be used if Processor \ % Processor Time (_Total) is maxing at 100%, to assess the effect of the SQL Server process on the processor.
    - Process \ % Processor Time (msmdsrv) – Definitely should be used if Processor \ % Processor Time (_Total) is maxing at 100%, to assess the effect of the Analysis Services process on the processor.
    - Process \ Working Set (sqlservr) – If the Memory \ Available MBytes counter is decreasing, this counter can be run to indicate whether the process is consuming larger and larger amounts of RAM. Process(instance) \ Working Set measures the size of the working set for each process, which indicates the number of allocated pages the process can address without generating a page fault.
    - Process \ Working Set (msmdsrv) – As above, but for the Analysis Services process.
    - Processor \ % Processor Time (_Total and individual cores) – Measures the total utilization of your processor by all running processes. If multi-proc, be mindful that only an average is provided.
    - Processor \ % Privileged Time (_Total) – To see how the OS is handling basic IO requests. If kernel-mode utilization is high, your machine is likely underpowered, as it is too busy handling basic OS housekeeping functions to be able to effectively run other applications.
    - Processor \ % User Time (_Total) – To see how the applications are interacting from a processor perspective; a high percentage utilization indicates that the server is dealing with too many applications and may require upgrading the hardware or scaling out.
    - Processor \ Interrupts/sec (_Total) – The average rate, in incidents per second, at which the processor received and serviced hardware interrupts. Should be consistent over time, but a sudden unexplained increase could indicate a device malfunction, which can be confirmed using the System \ Context Switches/sec counter.
    - Memory \ Pages/sec (N/A) – Indicates the rate at which pages are read from or written to disk to resolve hard page faults. This counter is a primary indicator of the kinds of faults that cause system-wide delays, and the primary counter to watch for indication of possible insufficient RAM to meet your server's needs. A good idea here is to configure a Perfmon alert that triggers when the number of pages per second exceeds 50 per paging disk on your system. You may also want to review the configuration of the page file on the server.
    - Memory \ Available MBytes (N/A) – The amount of physical memory, in megabytes, available to processes running on the computer. If this counter is greater than 10% of the actual RAM in your machine then you probably have more than enough RAM. Monitor it regularly to see if any downward trend develops, and set an alert to trigger if it drops below 2% of the installed RAM.
    - Physical Disk \ Disk Transfers/sec (each physical disk) – If it goes above 10 disk I/Os per second then you've got poor response time for your disk.
    - Physical Disk \ % Idle Time (_Total) – If Disk Transfers/sec is above 25 disk I/Os per second, use this counter, which measures the percentage of time that your hard disk is idle during the measurement interval. If you see this counter fall below 20%, you've likely got read/write requests queuing up for a disk that is unable to service them in a timely fashion.
    - Physical Disk \ Disk Queue Length (the OLAP and SQL physical disks) – A value that is consistently less than 2 means that the disk system is handling the IO requests against the physical disk.
    - Network Interface \ Bytes Total/sec (the NIC) – Should be monitored over a period of time to see if there is an increase or decrease in network utilization.
    - Network Interface \ Current Bandwidth (the NIC) – An estimate of the current bandwidth of the network interface in bits per second (bps).
    - MSAS 2005: Memory \ Memory Limit High KB (N/A) – Shows the high memory limit configured for SSAS (as a percentage) in C:\Program Files\Microsoft SQL Server\MSAS10.MSSQLSERVER\OLAP\Config\msmdsrv.ini.
    - MSAS 2005: Memory \ Memory Limit Low KB (N/A) – Shows the low memory limit configured for SSAS (as a percentage) in the same msmdsrv.ini file.
    - MSAS 2005: Memory \ Memory Usage KB (N/A) – Displays the memory usage of the server process.
    - MSAS 2005: Memory \ File Store KB (N/A) – Displays the amount of memory that is reserved for the cache. Note that if the total memory limit in msmdsrv.ini is set to 0, no memory is reserved for the cache.
    - MSAS 2005: Storage Engine Query \ Queries from Cache Direct/sec (N/A) – Displays the rate of queries answered directly from the cache.
    - MSAS 2005: Storage Engine Query \ Queries from Cache Filtered/sec (N/A) – Displays the rate of queries answered by filtering an existing cache entry.
    - MSAS 2005: Storage Engine Query \ Queries from File/sec (N/A) – Displays the rate of queries answered from files.
    - MSAS 2005: Storage Engine Query \ Average time/query (N/A) – Displays the average time of a query.
    - MSAS 2005: Connection \ Current connections (N/A) – Displays the number of connections against the SSAS instance.
    - MSAS 2005: Connection \ Requests/sec (N/A) – Displays the rate of query requests per second.
    - MSAS 2005: Locks \ Current Lock Waits (N/A) – Displays the number of connections waiting on a lock.
    - MSAS 2005: Threads \ Query pool job queue length (N/A) – The number of queries in the job queue.
    - MSAS 2005: Proc Aggregations \ Temp file bytes written/sec (N/A) – Shows the number of bytes of data processed in a temporary file.
    - MSAS 2005: Proc Aggregations \ Temp file rows written/sec (N/A) – Shows the number of rows of data processed in a temporary file.
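    As a rough sketch of how the two pieces above can be scripted (the counter paths, account, server, database and query file names here are illustrative assumptions, and you should verify the exact logman and ascmd switches for your versions):

        rem Create the counter log under an account that has rights to the SSAS
        rem instance (the -u switch supplies the RUN AS credentials; * prompts
        rem for the password).
        logman create counter SSASBaseline -c "\Processor(_Total)\% Processor Time" "\Memory\Pages/sec" "\MSAS 2005:Connection\Current connections" -si 15 -o "C:\PerfLogs\SSASBaseline" -u MYDOMAIN\ssasmonitor *
        logman start SSASBaseline

        rem Simulate ten concurrent connections running the worst-performing
        rem query identified by the customer.
        for /L %%i in (1,1,10) do start "" ascmd.exe -S MySSASServer -d MyCubeDB -i worst_query.xmla -o result%%i.xml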

    Read the article

  • Creating a Synchronous BPEL composite using File Adapter

    - by [email protected]
    By default, the JDeveloper wizard generates asynchronous WSDLs when you use technology adapters. Typically, a user follows these steps when creating an adapter scenario in 11g: 1) Create a SOA Application with either "Composite with BPEL" or an "Empty Composite". If the user chooses "Empty Composite", then he or she is required to drop the "BPEL Process" from the "Service Components" pane onto the SOA Composite Editor. Either way, the user comes to the screen where he/she fills in the process details. Please note that the user is required to choose "Define Service Later" as the template. 2) Create the inbound service and outbound references and wire them with the BPEL component. 3) Finally, create the BPEL process with an initiating <receive> activity to retrieve the payload and an <invoke> activity to write the payload.

    This is how most BPEL processes that use adapters are modeled. And if we scrutinize the generated WSDL, we can clearly see that it is one-way, which makes the BPEL process asynchronous. In other words, the inbound FileAdapter polls for files in the directory and, for every file that it finds there, translates the content into XML and publishes it to BPEL. But since the BPEL process is asynchronous, the adapter returns immediately after the publish and performs the required post-processing, e.g. deletion/archival and so on. The disadvantage of such asynchronous BPEL processes is that it becomes difficult to throttle the inbound adapter. In other words, the inbound adapter keeps sending messages to BPEL without waiting for the downstream business processes to complete. This might lead to several issues, including higher memory usage, CPU usage and so on.

    In order to alleviate these problems, we will manually tweak the WSDL and BPEL artifacts into synchronous processes. Once we have synchronous BPEL processes, the inbound adapter automatically throttles itself, since the adapter is forced to wait for the downstream process to complete with a <reply> before processing the next file or message. The tweak is to convert the one-way WSDL into a two-way WSDL, thereby making it synchronous, and then add a <reply> activity to the inbound adapter partnerlink at the end of your BPEL process (a sketch follows below). You are done.

    Please remember that such an exercise is NOT required for Mediator, since the Mediator routing rules are sequential by default. In other words, the Mediator uses the caller thread (the inbound file adapter thread) for processing the routing rules. This is the case even if the WSDL for the Mediator is one-way.
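    To illustrate the one-way-to-two-way tweak (a sketch only: the port type, operation and message names below are placeholders, not necessarily what the wizard generates for your adapter), the generated portType starts out with just an input, and the hand-edited synchronous version adds an output:

        <!-- Generated (one-way, asynchronous) -->
        <portType name="Read_ptt">
          <operation name="Read">
            <input message="tns:Read_msg"/>
          </operation>
        </portType>

        <!-- Tweaked (two-way, synchronous) -->
        <portType name="Read_ptt">
          <operation name="Read">
            <input message="tns:Read_msg"/>
            <output message="tns:ReadResponse_msg"/>
          </operation>
        </portType>

    The referenced reply message (tns:ReadResponse_msg here) must also be declared as a <message> element in the same WSDL, and the BPEL <reply> activity is then wired to that output.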

    Read the article

  • What Counts For a DBA: Ego

    - by Louis Davidson
    Leaving aside, for a second, Freud’s psychoanalytical definitions, the term “ego” generally refers to a person’s sense of self, and their self-esteem. In casual usage, however, it usually appears in the adjectival form, “egotistical” (most often followed by “jerk”). You don’t need to be a jerk to be a DBA; humility is important. However, ego is important too. A good DBA needs a certain degree of self-esteem…a belief and pride in what he or she can do better than anyone else can. The ideal DBA needs to be humble enough to admit when they are wrong but egotistical enough to know when they are right, and to stand up for that knowledge and make their voice heard.

    In most organizations, the DBA team is seriously outnumbered by headstrong developers and clock-driven managers, and “great” DBAs will often be outnumbered by…well…the not so great. In order to be heard in this environment, a DBA will not only need to be very skilled, but will also need a healthy dose of ego. As Freud might have put it, the unconscious desire of the DBA (the id) is for iron-fist control over their databases, and the code that runs in them. However, the ego moderates this desire, seeking to “satisfy the id in realistic ways that, in the long term, bring benefit rather than grief“. In other words, the ego understands the need to exert a measure of control and self-belief, but also to tolerate and play nicely with developers and other DBAs. The trick, naturally, is learning how to be heard when it is important, but also to make everyone around you welcome that input, even when you have to be bold to make the “I know what I am talking about, and you…well…not so much” decisions.

    Consider a baseball team, bottom of the ninth inning of the championship game, man on first and down one run. Almost anyone on that team will have the ability to hit a home run, but only one or two will have the iron belief that they can pull it off in this critical, end-game situation. The player you need in this situation is the one who has passionately gone the extra mile preparing for just this moment, is bursting at the seams with self-confidence, and can look the coach in the eye and state, boldly, “Put me in, I am your best bet“. Likewise, on those occasions when high customer demand coincides with copious system errors, and panic is bubbling just beneath the surface, you don’t need the minimally qualified support person, armed with the “reboot and hope” technique (though that sometimes works!). You need the DBA who steps up and says, “Put me in” and has the skill and tenacity to back up those words and to pinpoint and fix the problem, whatever it takes, while keeping customers and managers happy.

    Of course, the egotistical DBA will happily spend hours telling you how great they are at their job, and how brilliantly they put out a previous fire, and this is no guarantee that they can deliver. However, if an otherwise-humble DBA looks you in the eye and says, “I can do it”, then hear them out. Sometimes, this burst of ego will be exactly what’s required.

    Read the article

  • How to move complete SharePoint Server 2007 from one box to another

    - by DipeshBhanani
    It was the time of my first onsite client assignment on SharePoint. The client had a single-server production environment and wanted to upgrade the topology to a completely new SharePoint farm of three servers. So the task was to move the whole MOSS 2007 stuff to the new server environment without impacting data. The last three scary words “… without impacting data…” were actually putting pressure on my head. Moreover, the SSP had to be moved as well, because additional information had been added for users apart from the AD import.

    I thought I only had to do a backup and restore. It appeared pretty easy at first thought. Just because of those damn scary words, I decided to check the internet for guidance related to this scenario. I couldn’t find anything except the general guidance on moving servers on the Microsoft TechNet site. I promised myself I would start blogging with this post if I was successful in this task. Well, it took me a long time to write this, but I finally made it. I hope it will be useful to everyone looking at SharePoint server movement.

    Before beginning the restoration, make sure that there is no difference in the SharePoint versions at the source and destination servers. Also check whether the state of the SharePoint installation at the time of backup and restore is the same (e.g. SharePoint-related service packs and patches, if any).

    The main tasks of the server movement are as follows:

    1. Backup all the databases
    2. Install and configure SharePoint on the new environment
    3. Deploy all solutions (WSP files) globally to the destination server – for installing features attached to the solutions
    4. Install all the custom features
    5. Deploy/copy custom pages/files which were later added to the “12 hive” folder
    6. Restore the SSP
    7. Restore My Site
    8. Restore the other web applications

    Tasks 3 to 5 are for making sure that we have configured the environment well enough for the web applications to be restored successfully. The main and complex task was restoring the SSP. I started restoring the SSP through Central Admin. After a while, the restoration status was updated to “unsuccessful”. “Damn it, what went wrong?” I thought, looking at the error detail down the page. I can’t remember the error message, but I corrected it and restored again. Actually, once you fail restoring the SSP, unless you clean up all the related stuff properly, your restoration will fail again and again. I wanted to find the actual reason, so I cleaned, restored, cleaned, restored… I tried almost 5-6 times and finally succeeded. I realized how pleasant it is to see the word “Successful” on the screen. Without wasting more of your reading time, let me write out the detailed steps for restoring the SSP (a scripted sketch of the service stop/start steps follows below):

    1. Delete the SSP through the following STSADM command: stsadm -o deletessp -title <SSP name> -deletedatabases -force (e.g. stsadm -o deletessp -title SharedServices1 -deletedatabases -force)
    2. Check and delete the web application associated with the SSP if it exists.
    3. Check and remove the Alternate Access Mapping associated with the SSP if it exists.
    4. Check and delete the IIS site as well as the application pool associated with the SSP if they exist.
    5. Stop the following services: Office SharePoint Server Search; Windows SharePoint Services Search; Windows SharePoint Services Help Search
    6. Delete all the databases associated with or related to the SSP from SQL Server.
    7. Reset IIS.
    8. Start the three services from step 5 again.
    9. Restore the new SSP.
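    For reference, steps 5, 7 and 8 can be run from an elevated command prompt. This is only a sketch, assuming the default service display names listed above; adjust the names if your installation differs:

        rem Step 5: stop the search-related services
        net stop "Office SharePoint Server Search"
        net stop "Windows SharePoint Services Search"
        net stop "Windows SharePoint Services Help Search"

        rem (Step 6 - deleting the SSP databases - is done in SQL Server.)

        rem Step 7: reset IIS
        iisreset

        rem Step 8: start the services again
        net start "Office SharePoint Server Search"
        net start "Windows SharePoint Services Search"
        net start "Windows SharePoint Services Help Search"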
    After the SSP restoration, everything else completed very smoothly without any more issues. I made a few modifications to the sites for the change of server name and finally the new environment was ready.

    Read the article

  • SQL SERVER – Select the Most Optimal Backup Methods for Server

    - by pinaldave
    Backup and restore are very interesting concepts, and one should be very familiar with them if dealing with a production database. One never knows when a natural disaster or user error will surface, and the first thing everybody wants is to get back to the point in time when things were all fine. Well, in this article I have attempted to answer a few of the common questions related to backup methodology.

    How to Select a SQL Server Backup Type
    In order to select a proper SQL Server backup type, a SQL Server administrator needs to understand the difference between the major backup types clearly. Since a picture is worth a thousand words, let me offer it to you below.

    Select a Recovery Model First
    The very first question that you should ask yourself is: Can I afford to lose at least a little (15 min, 1 hour, 1 day) worth of data? Resist the temptation to save it all, as that comes with overhead – the majority of businesses outside finance can actually afford to lose a bit of data. If your answer is “YES, I can afford to lose some data”, select the SIMPLE (default) recovery model in the properties of your database; otherwise you need to select the FULL recovery model. The additional advantage of the Full recovery model is that it allows you to restore the data to a specific point in time, versus only to the last backup time in the Simple recovery model, but that exceeds the scope of this article.

    Backups in the SIMPLE Recovery Model
    In the SIMPLE recovery model you can choose to do just Full backups, or Full + Differential.

    Full Backup: This is the simplest type of backup; it contains all the information needed to restore the database and should be your first choice. It is often sufficient for small databases, but note that it makes a big impact on the performance of your database.

    Full + Differential Backup: After a Full backup, a Differential backup picks up all of the changes since the last Full backup. This means that if you made Full, Diff, Diff backups, the last Diff backup contains all of the changes and you don’t need the previous Differential backup. A Differential backup is obviously smaller and carries less performance overhead.

    Backups in the FULL Recovery Model
    In the FULL recovery model you can select Full + Transaction Log, or Full + Differential + Transaction Log backups. You have to create Transaction Log backups because that is when the log is truncated; otherwise your transaction log will grow uncontrollably.

    Full + Transaction Log Backup: You would always need to perform a Full backup first, then a series of Transaction Log backups. Note that (in contrast to Differential) you need ALL the transaction log backups since the last Full or Diff backup to restore properly. Transaction log backups have the smallest performance overhead and can be performed often.

    Full + Differential + Transaction Log Backup: If you want to ease the performance overhead on your server, you can replace some of the Full backups in the previous scenario with Differentials. Your restore scenario would start from the Full, then the last Differential, then all of the remaining transaction log backups. (A T-SQL sketch of the three backup types follows below.)

    Typical Backup Scenarios
    You may say, “Well, it is all nice – give me the examples now.” As you may already know, my favorite SQL backup software is SQLBackupAndFTP. If you go to the Advanced Backup Schedule form in this program and click the “Load a typical backup plan…” link, it will give you these scenarios that I think are quite common – see the image below.
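    For reference, here is what the three backup types above look like in plain T-SQL (the database name and file paths are illustrative; the article itself drives these through SQLBackupAndFTP):

        -- Full backup: the baseline every strategy starts from
        BACKUP DATABASE SalesDB TO DISK = N'D:\Backup\SalesDB_Full.bak';

        -- Differential backup: only the changes since the last Full backup
        BACKUP DATABASE SalesDB TO DISK = N'D:\Backup\SalesDB_Diff.bak' WITH DIFFERENTIAL;

        -- Transaction log backup: FULL recovery model only; truncates the log
        BACKUP LOG SalesDB TO DISK = N'D:\Backup\SalesDB_Log.trn';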
    The Simplest Way to Schedule SQL Backups
    I hate to repeat myself, but backup scheduling in SQL Agent leaves a lot to be desired. I do not know a simpler way to schedule your SQL Server backups than in SQLBackupAndFTP – see the image below. The whole backup scheduling with compression, encryption and upload to a Network Folder / HDD / NAS Drive / FTP / Dropbox / Google Drive / Amazon S3 takes just a few minutes – see my previous post for the review.

    Final Words
    This post offered an explanation of the major backup types only. For more complicated scenarios, or to research other options, go to MSDN as usual.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Project Euler 17: (Iron)Python

    - by Ben Griswold
    In my attempt to learn (Iron)Python out in the open, here’s my solution for Project Euler Problem 17. As always, any feedback is welcome.

    # Euler 17
    # http://projecteuler.net/index.php?section=problems&id=17
    # If the numbers 1 to 5 are written out in words:
    # one, two, three, four, five, then there are
    # 3 + 3 + 5 + 4 + 4 = 19 letters used in total.
    # If all the numbers from 1 to 1000 (one thousand)
    # inclusive were written out in words, how many letters
    # would be used?
    #
    # NOTE: Do not count spaces or hyphens. For example, 342
    # (three hundred and forty-two) contains 23 letters and
    # 115 (one hundred and fifteen) contains 20 letters. The
    # use of "and" when writing out numbers is in compliance
    # with British usage.

    import time

    start = time.time()

    def to_word(n):
        h = { 1 : "one", 2 : "two", 3 : "three", 4 : "four", 5 : "five",
              6 : "six", 7 : "seven", 8 : "eight", 9 : "nine", 10 : "ten",
              11 : "eleven", 12 : "twelve", 13 : "thirteen", 14 : "fourteen",
              15 : "fifteen", 16 : "sixteen", 17 : "seventeen", 18 : "eighteen",
              19 : "nineteen", 20 : "twenty", 30 : "thirty", 40 : "forty",
              50 : "fifty", 60 : "sixty", 70 : "seventy", 80 : "eighty",
              90 : "ninety", 100 : "hundred", 1000 : "thousand" }

        word = ""

        # Reverse the digits so position (ones, tens,
        # hundreds, ...) can be easily determined
        a = [int(x) for x in str(n)[::-1]]

        # Thousands position
        if len(a) == 4 and a[3] != 0:
            # This can only be one thousand based
            # on the problem/method constraints
            word = h[a[3]] + " thousand "

        # Hundreds position
        if len(a) >= 3 and a[2] != 0:
            word += h[a[2]] + " hundred"

            # Add the "and" string if the tens or ones
            # position is occupied with a non-zero value.
            # Note: routine is broken up this way for [my] clarity.
            if len(a) >= 2 and a[1] != 0:    # catch 10 - 99
                word += " and"
            elif len(a) >= 1 and a[0] != 0:  # catch 1 - 9
                word += " and"

        # Tens and ones position
        tens_position_value = 99
        if len(a) >= 2 and a[1] != 0:
            # Calculate the tens position value from the
            # first and second elements in the array,
            # e.g. (8 * 10) + 1 = 81
            tens_position_value = int(a[1]) * 10 + a[0]
            if tens_position_value <= 20:
                # If the tens position value is 20 or less
                # there's an entry in the hash. Use it and there's
                # no need to consider the ones position
                word += " " + h[tens_position_value]
            else:
                # Determine the tens position word by
                # dividing by 10 first, e.g. 8 * 10 = h[80].
                # We pick up the ones position word in
                # the next part of the routine
                word += " " + h[a[1] * 10]

        if len(a) >= 1 and a[0] != 0 and tens_position_value > 20:
            # Deal with the ones position where the tens position is
            # greater than 20 or we have a single-digit number
            word += " " + h[a[0]]

        # Strip the spaces so only letters are counted
        return word.replace(" ", "")

    def to_word_length(n):
        return len(to_word(n))

    print sum([to_word_length(i) for i in xrange(1, 1001)])
    print "Elapsed Time:", (time.time() - start) * 1000, "millisecs"
    a = raw_input('Press return to continue')

    Read the article

  • Using rel=next and rel=prev with multiple sets of paginated content on the same page

    - by jakejgordon
    We are running into issues trying to figure out how to implement rel="next" and rel="prev" -- coupled with rel="canonical" -- with multiple sets of paginated content on the same page, with pages in multiple cultures. In other words, how do we implement these when we have a pager for both Product Reviews and Questions and Answers (aka "Q&A") on the same page, with duplicate content across culture-specific URLs (e.g. /us/en/my-product vs. /ca/en/my-product)? Our current implementation will actually do a full postback when you click Page 2, and will add something to the query string (e.g. website.com/ca/en/my-product?reviewpage=2 or website.com/ca/en/my-product?questionpage=2).

    If we only had one set of paginated content then the implementation would certainly be more straightforward. Adding a second set of paginated content (i.e. Q&A) complicates things. Let's assume that we want the United States English page to be the canonical target (i.e. /us/en/my-product) based on culture. If you go to the /ca/en/my-product page you'll have a rel="canonical" href="/us/en/my-product". So far so good. Let's also assume that we are not implementing a page that lists ALL Product Reviews and Q&A. This would likely solve a number of our problems by using rel="canonical" to this page, but is not an option for reasons that are out of scope for this discussion. Now if you click on page 2 of Product Reviews, it will reload the page with /ca/en/my-product?reviewpage=2 as the URL. Given this scenario, here are my questions:

    1. On page 2 of the my-product page on the Canadian site, should there be a rel="canonical" to /us/en/my-product?reviewpage=2 (assuming the content is identical in the United States and Canada)?
    2. Should the rel="prev" go to /ca/en/my-product?reviewpage=1, or should it go to /ca/en/my-product? The query-string version would really only be accessible when using the pager and shows the exact same content as the base page. The following two questions are closely related to this one.
    3. Should /ca/en/my-product?reviewpage=1 have a rel="canonical" directly to /us/en/my-product (the United States page with nothing in the query string), since the content is identical?
    4. Given that Q&A content is also paginated, should there be a rel="next" on the base page without a query string? In other words, should the /ca/en/my-product page have a rel="next" to /ca/en/my-product?reviewpage=2 AND a rel="next" to /ca/en/my-product?questionpage=2? So far as I can tell it doesn't make sense to have multiple rel="next" implementations on the same page.

    I suspect that the pages with query-string values should have rel="next" and rel="prev" that only point to other pages with query strings and not to the base page (sketched below). The ?reviewpage=1 and ?questionpage=1 pages would then just have a rel="canonical" to /us/en/my-product. Thoughts? I know this is a tough one -- that's why I brought it to this community. Thanks so much for your help in advance!
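    To visualize the arrangement suspected above (purely an illustration of the question using the same example URLs, not a confirmed answer), the head of page 2 of the Canadian reviews would carry something like:

        <!-- head of /ca/en/my-product?reviewpage=2 -->
        <link rel="canonical" href="http://website.com/us/en/my-product?reviewpage=2" />
        <link rel="prev" href="http://website.com/ca/en/my-product?reviewpage=1" />
        <link rel="next" href="http://website.com/ca/en/my-product?reviewpage=3" />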

    Read the article

  • Big Data – Interacting with Hadoop – What is Sqoop? – What is Zookeeper? – Day 17 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of Pig and Pig Latin in the Big Data story. In this article we will understand what Sqoop and Zookeeper are in the Big Data story. There are two important components one should learn about when learning to interact with Hadoop – Sqoop and Zookeeper.

    What is Sqoop?
    Most businesses store their data in RDBMS as well as other data warehouse solutions. They need a way to move data to the Hadoop system for various processing and to return it back to RDBMS from the Hadoop system. The data movement can happen in real time or at various intervals in bulk. We need a tool which can help us move this data from SQL to Hadoop and from Hadoop to SQL. Sqoop (SQL to Hadoop) is such a tool: it extracts data from non-Hadoop data sources, transforms it into a format which Hadoop can use, and later loads it into HDFS. Essentially it is an ETL tool, in that it Extracts, Transforms and Loads from SQL to Hadoop. The best part is that it also extracts data from Hadoop and loads it into non-Hadoop (RDBMS) data stores. Essentially, Sqoop is a command-line tool which does SQL to Hadoop and Hadoop to SQL. It is a command-line interpreter that creates MapReduce jobs behind the scenes to import data from an external database to HDFS. It is a very effective and easy-to-learn tool for non-programmers (a small illustrative example follows at the end of this post).

    What is Zookeeper?
    ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. In other words, Zookeeper is a replicated synchronization service with eventual consistency. In simpler words: in a Hadoop cluster there are many different nodes and one node is the master. Let us assume that the master node fails for any reason. In this case, the role of the master node has to be transferred to a different node. The main role of the master node is managing the writers, as that task requires persistence in the order of writing. In this kind of scenario, Zookeeper will assign a new master node and make sure that the Hadoop cluster performs without any glitch. Zookeeper is Hadoop’s method of coordinating all the elements of these distributed systems. Here are a few of the tasks which Zookeeper is responsible for:

    - Zookeeper manages the entire workflow of starting and stopping the various nodes in the Hadoop cluster.
    - When any process in the Hadoop cluster needs a certain configuration to complete its task, Zookeeper makes sure that the node gets the necessary configuration consistently.
    - In case the master node fails, Zookeeper can assign a new master node and make sure the cluster works as expected.

    There are many other tasks Zookeeper performs around Hadoop cluster coordination and communication. Basically, without the help of Zookeeper it is not practical to design a new fault-tolerant distributed application.

    Tomorrow
    In tomorrow’s blog post we will discuss a very important component of the Big Data ecosystem – Big Data analytics.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
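    As a quick supplement to the Sqoop section above, here is a minimal illustrative pair of Sqoop commands (the JDBC connection string, table and directory names are placeholders):

        # Import a table from an RDBMS into HDFS
        sqoop import --connect jdbc:mysql://dbserver/sales --table orders --target-dir /user/hadoop/orders

        # Export processed results from HDFS back into the RDBMS
        sqoop export --connect jdbc:mysql://dbserver/sales --table order_summary --export-dir /user/hadoop/order_summary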

    Read the article

  • PASS summit 2013. We do not remember days. We remember moments.

    - by Maria Zakourdaev
      "Business or pleasure?" barked the security officer in the Charlotte International Airport. "I’m not sure, sir," I whimpered, immediately losing all courage. "I'm here for the database technologies summit called PASS”. "Sounds boring. Definitely a business trip." Boring?! He couldn’t have been more wrong. If he only knew about the countless meetings throughout the year where I waved my hands at my great boss and explained again and again how fantastic this summit is and how much I learned last year. One by one, the drops of water began eating away at the stone. He finally approved of my trip just to stop me from torturing him. Time moves as slow as a turtle when you are waiting for something. Time runs as fast as a cheetah when you are there. PASS has come...and passed. It’s been an amazing week. Enormous sqlenergy has filled the city, filled the convention center and the surrounding pubs and restaurants. There were awesome speakers, great content, and the chance to meet most inspiring database professionals from all over the world. Some sessions were unforgettable. Imagine a fully packed room with more than 500 people in awed silence, catching each and every one of Paul Randall's words. His tremendous energy and deep knowledge were truly thrilling. No words can describe Rob Farley's unique presentation style, captivating and engaging the audience. When the precious session minutes were over, I could tell that the many random puzzle pieces of information that his listeners knew had been suddenly combined into a clear, cohesive picture. I was amazed as always by Paul White's great sense of humor and his phenomenal ability to explain complicated concepts in a simple way. The keynote by the brilliant Dr. DeWitt from Microsoft in front of the full summit audience of 5000 deeply listening people was genuinely breathtaking. The entire conference throughout offered excellent speakers who inspired me to absorb the knowledge and use it when I got home. To my great surprise, I found that there are other people in this world who like replication as much I do. During the Birds of a Feather Luncheon, SQL Server MVP Ted Krueger was writing a script for replicating the food to other tables. I learned many things at PASS, and not all of them were about SQL. After three summits, this time I finally got the knack of networking. I actually went up and spoke to people, and believe me, that was not easy for an introvert. But this is what the summit is all about. Sqlpeople. They are the ones who make it such an exciting experience. I will be looking forward to the next year. Till then I have my notes and new ideas. How long was the summit? Thousands of unforgettable moments.

    Read the article

  • SQL SERVER – Unable to DELETE Project in Data Quality Projects (DQS)

    - by pinaldave
    Here is the email which made me write this blog post. When I write a blog post, I keep in mind that a developer who is not familiar with the concept will attempt it on a development server. If for any reason you attempt it on any server other than your personal server, you should have complete confidence in your own expertise and understand the risk behind it. Well, let us read the email which I received. I have modified it a bit to remove information related to the organization and the individual.

    “I just read your blog post on Beginning DQS. I went ahead and followed every single screenshot and it worked fine. I was able to execute the DQS project successfully. However, the same blog post got me in trouble – a serious trouble. After the first successful deployment I went ahead and created a few of my own knowledge bases and projects. I played around a bit and then decided to get back to real work. Now, we had deployed DQS on the production server only, so I experimented on the production server. When I got back to my work, I forgot to close all the windows. My manager found the window open and saw my test projects. He asked me to delete my experiments immediately and said words which I cannot write to you. Here is the problem. I am not able to delete the project which I created earlier. I am able to open it and play with it, but the delete option is disabled and grayed out (see attached image). Now I believe there is nothing wrong with this project as it was just a test project. Would you please write to my manager that it is not harmful to leave that project there as it is? It is also not using any resources. I think he will believe you.”

    As I said, this kind of email makes me uncomfortable. I do not want someone to execute anything on a production server. I often write notes and disclaimers on my posts when something is dangerous to execute on a production server. However, if someone who is not an expert with SQL Server attempts something new on a production server, I think the major issue lies with the person (admin) who gave the new developer permission to the production server. This has to be carefully avoided. Here was my response to the individual.

    “I cannot write anything to your manager, as he has not asked me anything. Honestly, I believe his behavior is correct, as you should not have executed anything on the production server without prior approval and testing on a development server. Any R&D must be done on a local box or a development box. I suggest you request your manager to remove access from users who do not need it. If he is a good manager, he might have already done so after this recent event. I also see your screenshot. Here is the issue: while you were playing with the project, you might have closed it halfway through, without completing it. For that reason it is locked. You can open it and continue from the same place where you left the project. If you do not need the project any more, right-click on it and click to unlock the project. This will enable the DELETE option and then you can delete the project. Next time, be safe out there. It may be dangerous to have admin access to a production server when it is not needed.“

    I have not yet heard from him, but I believe he will take my words positively.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Data Quality Services, DQS

    Read the article

  • Wireless networks are not detected at start up in Ubuntu 12.04

    - by Kanhaiya Mishra
    I have recently (three or four days ago) installed Ubuntu 12.04 via the Windows installer, i.e. wubi.exe. After the installation completed, wireless and Ethernet were both working well. But after a restart, wireless networks didn't show up, while in the network manager both networking and wireless were enabled. Though sometimes after boot it did show the available networks, but very rarely. So I went through various posts regarding wireless issues in Ubuntu 12.04 and tried many things, but ended up with nothing satisfactory. I have a Broadcom 4313 LAN network controller and the brcmsmac driver. Then, relying on some suggestions, I tried to install the bcm-wl driver but couldn't install it due to an error in the jockey.log file. Then I tried a fresh installation of the same driver but still couldn't resolve the startup issues with wireless. Then again I reinstalled Ubuntu inside Windows using the wubi installer. This time again the same problem occurred after boot. But this time I successfully installed the wl driver before disturbing the file-system files of Ubuntu. But again, the same issue. This time I noticed some new things: if I insert the Ethernet/LAN cable before startup, then wireless networks are available and of course LAN (wired) networks also work. But if I don't plug in the cable before startup and then plug it in after startup, it detects neither the Ethernet network nor wireless. So I hadn't noticed before that LAN along with wifi also doesn't work after startup. But if I suspend the session, make it sleep and log in again, then it works. I have tried it every time, and the WLAN works perfectly after a suspend. But I am still unable to resolve the startup problem. Each time I boot, I first have to suspend once; only then are networks available. It irritates me each time I reboot/boot my lappy. So please help me out of this problem. Any ideas/help regarding this issue would be highly appreciated. Some of the commands that I ran gave the following results:

    # lspci 00:00.0 Host bridge: Intel Corporation Core Processor DRAM Controller (rev 12) 00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 12) 00:16.0 Communication controller: Intel Corporation 5 Series/3400 Series Chipset HECI Controller (rev 06) 00:1a.0 USB controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 06) 00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 06) 00:1c.0 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 (rev 06) 00:1c.1 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 2 (rev 06) 00:1c.5 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 6 (rev 06) 00:1d.0 USB controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 06) 00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev a6) 00:1f.0 ISA bridge: Intel Corporation Mobile 5 Series Chipset LPC Interface Controller (rev 06) 00:1f.2 SATA controller: Intel Corporation 5 Series/3400 Series Chipset 6 port SATA AHCI Controller (rev 06) 00:1f.3 SMBus: Intel Corporation 5 Series/3400 Series Chipset SMBus Controller (rev 06) 00:1f.6 Signal processing controller: Intel Corporation 5 Series/3400 Series Chipset Thermal Subsystem (rev 06) 03:00.0 Network controller: Broadcom Corporation BCM4313 802.11b/g/n Wireless LAN Controller (rev 01) 04:00.0 Ethernet controller: Atheros Communications Inc.
AR8152 v1.1 Fast Ethernet (rev c1) ff:00.0 Host bridge: Intel Corporation Core Processor QuickPath Architecture Generic Non-core Registers (rev 02) ff:00.1 Host bridge: Intel Corporation Core Processor QuickPath Architecture System Address Decoder (rev 02) ff:02.0 Host bridge: Intel Corporation Core Processor QPI Link 0 (rev 02) ff:02.1 Host bridge: Intel Corporation Core Processor QPI Physical 0 (rev 02) ff:02.2 Host bridge: Intel Corporation Core Processor Reserved (rev 02) ff:02.3 Host bridge: Intel Corporation Core Processor Reserved (rev 02) # sudo lshw -C network *-network description: Wireless interface product: BCM4313 802.11b/g/n Wireless LAN Controller vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:03:00.0 logical name: eth1 version: 01 serial: 70:f1:a1:49:b6:ab width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=wl0 driverversion=5.100.82.38 ip=192.168.1.7 latency=0 multicast=yes wireless=IEEE 802.11 resources: irq:17 memory:f0500000-f0503fff *-network description: Ethernet interface product: AR8152 v1.1 Fast Ethernet vendor: Atheros Communications Inc. physical id: 0 bus info: pci@0000:04:00.0 logical name: eth0 version: c1 serial: b8:ac:6f:6b:f7:4a capacity: 100Mbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vpd bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=atl1c driverversion=1.0.1.0-NAPI firmware=N/A latency=0 link=no multicast=yes port=twisted pair resources: irq:44 memory:f0400000-f043ffff ioport:2000(size=128) # lsmod | grep wl wl 2568210 0 lib80211 14381 2 lib80211_crypt_tkip,wl # sudo iwlist eth1 scanning eth1 Scan completed : Cell 01 - Address: 30:46:9A:85:DA:9A ESSID:"BH DASHIR 2" Mode:Managed Frequency:2.462 GHz (Channel 11) Quality:4/5 Signal level:-60 dBm Noise level:-98 dBm IE: IEEE 802.11i/WPA2 Version 1 Group Cipher : CCMP Pairwise Ciphers (1) : CCMP Authentication Suites (1) : PSK IE: Unknown: DD7F0050F204104A00011010440001021041000100103B000103104700109AFE7D908F8E2D381860668BA2E8D8771021000D4E4554474541522C20496E632E10230009574752363134763130102400095747523631347631301042000538333235381054000800060050F204000110110009574752363134763130100800020084 Encryption key:on Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 18 Mb/s 24 Mb/s; 36 Mb/s; 54 Mb/s; 6 Mb/s; 9 Mb/s 12 Mb/s; 48 Mb/s Cell 02 - Address: C0:3F:0E:EB:45:14 ESSID:"BH DASHIR 3" Mode:Managed Frequency:2.462 GHz (Channel 11) Quality:2/5 Signal level:-71 dBm Noise level:-98 dBm IE: IEEE 802.11i/WPA2 Version 1 Group Cipher : CCMP Pairwise Ciphers (1) : CCMP Authentication Suites (1) : PSK IE: Unknown: DD7F0050F204104A00011010440001021041000100103B00010310470010F3C9BBE499D140540F530E7EBEDE2F671021000D4E4554474541522C20496E632E10230009574752363134763130102400095747523631347631301042000538333235381054000800060050F204000110110009574752363134763130100800020084 Encryption key:on Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 18 Mb/s 24 Mb/s; 36 Mb/s; 54 Mb/s; 6 Mb/s; 9 Mb/s 12 Mb/s; 48 Mb/s Cell 03 - Address: A0:21:B7:A8:2F:C0 ESSID:"BH DASHIR 4" Mode:Managed Frequency:2.422 GHz (Channel 3) Quality:1/5 Signal level:-86 dBm Noise level:-98 dBm IE: IEEE 802.11i/WPA2 Version 1 Group Cipher : CCMP Pairwise Ciphers (1) : CCMP Authentication Suites (1) : PSK IE: Unknown: 
DD8B0050F204104A0001101044000102103B0001031047001000000000000010000000A021B7A82FC01021000D4E6574676561722C20496E632E10230009574E523130303076321024000456324831104200046E6F6E651054000800060050F20400011011001B574E5231303030763228576972656C6573732041502D322E344729100800020086103C000103 Encryption key:on Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 6 Mb/s 9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s; 36 Mb/s 48 Mb/s; 54 Mb/s

    Read the article

  • Experimenting with other search engines

    - by Bill Graziano
    I’ve been a Google user so long I can hardly remember what I used before it.  Alta Vista maybe?  Or Yahoo.  I’ve tried Bing off and on but it never really stuck.  I probably care more about search engines than your average user because of their impact on SQLTeam.com.  Lately I’ve been trying two other search engines and actually switched to one of them.

    I’ve played with Blekko a little in the past.  They have some interesting ways to “slice up” your results.  For example, searching on “SQL Server /blogs /date” should just search all the recently updated blogs.  Those two extra words on the search are slashtags.  The full list of slashtags runs from /forums to just see forums to /twitter to /nikon to /reviews and on and on and on.  I laughed when I saw they had slashtags for both liberal and conservative.  I’d hate to find any search results that don’t match my existing worldview :)  You can also create your own slashtags.  I created a mini-search engine for the SQL Server blogs that I read.  You can search it for “backup” at http://blekko.com/ws/backup+/billgraziano/sql-sites.  I uploaded my OPML and it limited the search to just those sites.  It seems like the site is focusing more on curating results and less on algorithms.  This is an interesting site for those power searchers.  There are some great ways to curate results using slashtags.  For 99% of my searches (type words, click on one of the first few links) slashtags are overkill.  They do have some good information on page and site ranking though so I’ll probably spend some time looking through that.

    Blekko recently got my attention again when they said they were banning “content farms” - and that includes eHow and experts-exchange.  I always feel used when I click on a link to EE and find myself scrolling all the way to the bottom to see if I can find the answer.  Sometimes it’s there but sometimes it tells me I need to pay first.  I’ve longed for a way to always exclude certain sites.  Blekko might be taking a hammer to a problem that needs a scalpel but it’s an interesting choice.  (And some of the comments in the TechCrunch link are interesting if you’re a search nerd.)

    DuckDuckGo is an odd name for a search engine.  Their big hook is that they don’t have search history.  If you wade through your Google account you can probably find the page where it stores your search history.  It was pretty enlightening to find mine.  It was easy to disable but that got me started looking at other search engines.  DDG (or DukGo) just feels like Google used to in the old days.  The results are good enough and the site is fast. Searches will return a snippet from WikiPedia or another site (like StackOverflow) at the top.  I think the idea is to answer the question without needing to visit the site.  I’m not sure that’s a good thing for SQLTeam.com. The only thing I really miss is image search.  You can add a “!i” at the end of any search and it will search the images on Bing.  Bing doesn’t have a great image search but it works for most of what I need.  They call these exclamation marks “!bangs” and they are kinda, sorta like slashtags.

    I’ve been using DuckDuckGo now for a few weeks and I’m pretty happy with it.  I use Chrome for my browser and it was an easy switch to make.  It’s still a little surprising seeing my search results come up in a different format.  I’m starting to get used to it though.

    Read the article

  • The Future of M2M in a Connected World

    - by Kristin Rose
    There is no denying that the technological landscape as we know it is drastically changing, all thanks to three little words – Machine to Machine. The M2M platform has taken over as one of the industry’s main buzz words and there is no question as to why! Just 5 months ago we had a guest post on “Machine to Machine – The Internet of Things – It’s about the Data.” Now companies are extending the use of M2M data to increase opportunity and intelligence across the Enterprise. Just this week, Oracle announced the results of its “Designing an M2M Platform for the Connected World” research, examining the evolving drivers behind ‘Machine to Machine’ (M2M) projects and how those changes are impacting solution requirements. Be sure to read this exciting report here! To Infinity and Beyond, The OPN Communications Team

    Read the article

  • Is it possible to run dhcpd3 as non-root user in a chroot jail?

    - by Lenain
    Hi everyone. I would like to run dhcpd3 from a chroot jail on Debian Lenny. At the moment I can run it as root from my jail; now I want to do this as a non-root user (like BIND's "-u blah -t /path/to/jail" options). If I start my process like this:

        start-stop-daemon --chroot /home/jails/dhcp --chuid dhcp \
            --start --pidfile /home/jails/dhcp/var/run/dhcp.pid --exec /usr/sbin/dhcpd3

    I get stuck with these errors:

        Internet Systems Consortium DHCP Server V3.1.1
        Copyright 2004-2008 Internet Systems Consortium.
        All rights reserved.
        For info, please visit http://www.isc.org/sw/dhcp/
        unable to create icmp socket: Operation not permitted
        Wrote 0 deleted host decls to leases file.
        Wrote 0 new dynamic host decls to leases file.
        Wrote 0 leases to leases file.
        Open a socket for LPF: Operation not permitted

    The strace (abridged here: the full trace is dominated by ENOENT lookups while the dynamic loader and NSS probe alternate library paths) shows the two socket calls that fail without root:

        socket(PF_INET, SOCK_RAW, IPPROTO_ICMP) = -1 EPERM (Operation not permitted)
        [... reads /etc/dhcp3/dhcpd.conf, rewrites /var/lib/dhcp3/dhcpd.leases ...]
        socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP) = 4
        ioctl(4, SIOCGIFCONF, {64, {{"lo", {AF_INET, inet_addr("127.0.0.1")}}, {"eth0", {AF_INET, inet_addr("192.168.0.10")}}}}) = 0
        ioctl(4, SIOCGIFHWADDR, {ifr_name="eth0", ifr_hwaddr=00:c0:26:87:55:c0}) = 0
        socket(PF_PACKET, SOCK_PACKET, 768) = -1 EPERM (Operation not permitted)
        exit_group(1) = ?

    I understand that dhcpd needs to create raw sockets and bind port 67, but I don't know how to authorize that for a non-root user through the chroot. Any idea?


  • NSTextField autocomplete

    - by Rasmus Styrk
    Does anyone know of any class or lib that can implement autocompletion for an NSTextField? I'm trying to get the standard autocompletion to work, but it is made as a synchronous API, while I get my completion words via an API call over the internet. What I have done so far is:

        - (void)controlTextDidChange:(NSNotification *)obj
        {
            if ([obj object] == self.searchField) {
                [self.spinner startAnimation:nil];
                [self.wordcompletionStore completeString:self.searchField.stringValue];
                if (self.doingAutocomplete)
                    return;
                else {
                    self.doingAutocomplete = YES;
                    [[[obj userInfo] objectForKey:@"NSFieldEditor"] complete:nil];
                }
            }
        }

    When my store is done, it calls a delegate:

        - (void) completionStore:(WordcompletionStore *)store didFinishWithWords:(NSArray *)arrayOfWords
        {
            [self.spinner stopAnimation:nil];
            self.completions = arrayOfWords;
            self.doingAutocomplete = NO;
        }

    The code that returns the completion list to the NSTextField is:

        - (NSArray *)control:(NSControl *)control textView:(NSTextView *)textView completions:(NSArray *)words forPartialWordRange:(NSRange)charRange indexOfSelectedItem:(NSInteger *)index
        {
            *index = -1;
            return self.completions;
        }

    My problem is that this is always one request behind, and the completion list only shows on every second character the user inputs. I have tried searching Google and SO like a madman, but I can't seem to find any solution. Any help is much appreciated.
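    A possible way around the one-request lag (a hedged, untested sketch: it assumes the field is still being edited when the delegate fires, and it reuses the names from the question) is to re-trigger completion from the delegate callback once the fresh words have arrived, so the menu is rebuilt against the current results rather than the previous ones:

        - (void) completionStore:(WordcompletionStore *)store didFinishWithWords:(NSArray *)arrayOfWords
        {
            [self.spinner stopAnimation:nil];
            self.completions = arrayOfWords;
            self.doingAutocomplete = NO;

            // Re-run completion now that the asynchronous results are in;
            // -complete: does not change the text, so controlTextDidChange:
            // will not fire again and start another round trip.
            NSTextView *fieldEditor = (NSTextView *)[self.searchField currentEditor];
            if (fieldEditor) {
                [fieldEditor complete:nil];
            }
        }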


  • Embedding IronPython in a WinForms app and interrupting execution

    - by namenlos
    BACKGROUND: I've successfully embedded IronPython in my WinForms apps using techniques like the one described here: http://blog.peterlesliemorris.com/archive/2010/05/19/embedding-ironpython-into-a-c-application.aspx In the context of the embedding, my users may write loops, etc. I'm using IronPython 2.6 (both the IronPython for .NET 2.0 and the IronPython for .NET 4.0 builds).

    MY PROBLEM: Sometimes the users will need to interrupt the execution of their code. In other words, they need something like the ability to hit CTRL-C to halt execution when running Python or IronPython from the command line. I want to add a button to the WinForm that halts execution when pressed, but I'm not sure how to do this.

    MY QUESTION: How can I make it so that pressing a "stop" button actually halts the execution of the user-entered IronPython code?

    NOTES: I don't want to simply throw away that "session" - I still want the user to be able to interact with the session and access any results that were available before it was halted. I am assuming I will need to execute this in a separate thread; any guidance or sample code for doing this correctly would be appreciated.
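    One common approach (a sketch under assumptions, not a definitive answer: it relies on IronPython honouring Thread.Abort at interpreter checkpoints, which will not interrupt a call blocked inside native code, and the control and field names here are illustrative) is to run the user's code on a worker thread and abort that thread from the Stop button. The ScriptScope survives the abort, so results computed before the interruption stay accessible:

        // assumes: using System.Threading; using Microsoft.Scripting;
        //          using Microsoft.Scripting.Hosting;
        // _engine created once via Python.CreateEngine(); _scope is the
        // long-lived user session.
        private Thread _execThread;
        private ScriptEngine _engine;
        private ScriptScope _scope;

        private void runButton_Click(object sender, EventArgs e)
        {
            string code = codeTextBox.Text;
            _execThread = new Thread(() =>
            {
                try
                {
                    var source = _engine.CreateScriptSourceFromString(
                        code, SourceCodeKind.Statements);
                    source.Execute(_scope);
                }
                catch (ThreadAbortException)
                {
                    Thread.ResetAbort();   // swallow the abort; session stays usable
                }
            });
            _execThread.IsBackground = true;
            _execThread.Start();
        }

        private void stopButton_Click(object sender, EventArgs e)
        {
            if (_execThread != null && _execThread.IsAlive)
                _execThread.Abort();
        }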


  • SQLite table does not exist exception for existing SQLite database (and table)

    - by SK9
    I've followed the instructions given here for introducing an existing SQLite database to your Android app. When I query the table "android_metadata" this is fine, but when I run a similar query on my own table "words" (which has _id for its primary integer key) I get a "table does not exist" exception and the app crashes. Why is that? Code:

        Cursor c = myDatabase.query("android_metadata", null, null, null, null, null, null, null);

    works, but

        Cursor c = myDatabase.query("words", null, null, null, null, null, null, null);

    throws a "table does not exist" exception. This is how I'm creating the database (the references to paths and filenames are correct):

        private void copyDatabase() throws IOException {
            // Open the local db in assets as the input stream
            InputStream myInput = mContext.getAssets().open(DB_NAME);

            // Path to the just-created empty db
            String outFileName = DB_PATH + DB_NAME;

            // Open the empty db as the output stream
            OutputStream myOutput = new FileOutputStream(outFileName);

            // Transfer bytes from the input file to the output file
            byte[] buffer = new byte[1024];
            int length;
            while ((length = myInput.read(buffer)) > 0) {
                myOutput.write(buffer, 0, length);
            }

            // Close the streams
            myOutput.flush();
            myOutput.close();
            myInput.close();
        }

    (Note: To my eyes, the table is there. I'm looking right at it in my SQLite browser.)
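    For what it's worth, the usual failure mode with that tutorial is that the first query runs against the empty database which SQLiteOpenHelper itself created, rather than the copied one. A hedged sketch of the guard most versions of the tutorial pair with copyDatabase(), with DB_PATH and DB_NAME as in the question:

        // Sketch: ensure the bundled file overwrites the empty database
        // before any query ever runs. Requires java.io.File.
        public void createDatabase() throws IOException {
            File dbFile = new File(DB_PATH + DB_NAME);
            if (!dbFile.exists()) {
                // Let the helper create an empty database so that the
                // databases directory exists...
                this.getReadableDatabase();
                this.close();
                // ...then overwrite that empty file with the one in assets.
                copyDatabase();
            }
        }

    One consequence of the exists() check: if an earlier run already left an empty file at DB_PATH + DB_NAME, the copy is skipped from then on, and clearing the app's data (or uninstalling) is needed to force a fresh copy.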


  • Emacs Lisp: how to set encoding for call-process

    - by RamyenHead
    I thought I knew how to set the coding system (or encoding): use process-coding-system-alist. Apparently, it's not working.

        ;; -*- coding: utf-8 -*-
        (require 'cl)
        (let ((process-coding-system-alist
               '("cygwin/bin/bash" . (utf-8-dos . utf-8-unix))))
          (setq my-words (list "Lilo" "?_?" "_?" "?_" "?" "Stitch")
                my-cygwin-bash "C:/cygwin/bin/bash.exe"
                my-outbuf (get-buffer-create "*my cygwin bash echo test*"))
          (with-current-buffer my-outbuf
            (goto-char (point-max))
            (loop for word in my-words
                  do (insert (concat "echo " word "\n"))
                     (call-process my-cygwin-bash nil my-outbuf nil
                                   "-c" (concat "echo " word))))
          (display-buffer my-outbuf))

    Running the above code, the output is this:

        echo Lilo
        Lilo
        echo ?_?
        /usr/bin/bash: -c: line 0: unexpected EOF while looking for matching `"'
        /usr/bin/bash: -c: line 1: syntax error: unexpected end of file
        echo _?
        /usr/bin/bash: -c: line 0: unexpected EOF while looking for matching `"'
        /usr/bin/bash: -c: line 1: syntax error: unexpected end of file
        echo ?_
        /usr/bin/bash: $'echo \346\267\205?': command not found
        echo ?
        /usr/bin/bash: -c: line 0: unexpected EOF while looking for matching `"'
        /usr/bin/bash: -c: line 1: syntax error: unexpected end of file
        echo Stitch
        Stitch

    Anything sent to cygwin in Unicode is failing (MS Windows, Korean).
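    One thing stands out (an observation plus an untested sketch): process-coding-system-alist is an alist, i.e. a list of (REGEXP . (DECODING . ENCODING)) entries, but the let above binds it to a single cons cell, so the entry is never matched. Doubling the parentheses, or simply forcing the coding systems for the call, should behave differently:

        ;; process-coding-system-alist must be a LIST of entries --
        ;; note the doubled parentheses around the single entry.
        (let ((process-coding-system-alist
               '(("bash" . (utf-8-dos . utf-8-unix)))))
          (call-process my-cygwin-bash nil my-outbuf nil
                        "-c" (concat "echo " word)))

        ;; or override everything for the dynamic extent of the call:
        (let ((coding-system-for-write 'utf-8-unix)
              (coding-system-for-read  'utf-8-dos))
          (call-process my-cygwin-bash nil my-outbuf nil
                        "-c" (concat "echo " word)))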


  • Assistance with building an inverted-index

    - by tipu
    It's part of an information retrieval thing I'm doing for school. The plan is to create a hashmap of words using the first two letters of the word as a key, and any words with those two letters saved as a string value. So:

        hashmap["ba"] = "bad barley base"

    Once I'm done tokenizing a line I take that hashmap, serialize it, and append it to the text file named after the key. The idea is that if I take my data and spread it over hundreds of files I'll lessen the time it takes to fulfill a search by lessening the density of each file. The problem I am running into is that when I'm making 100+ files in each run, it happens to choke on creating a few files for whatever reason, and so those entries are empty. Is there any way to make this more efficient? Is it worth continuing this, or should I abandon it? I'd like to mention I'm using PHP. The two languages I know relatively intimately are PHP and Java. I chose PHP because the front end will be very simple to do and I will be able to add features like autocompletion/suggested search without a problem. I also see no benefit in using Java. Any help is appreciated, thanks.
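    Since a few of the hundred-plus bucket files come out empty with no visible error, a first step may be to make each write loud and locked. A minimal sketch (the function name and file layout are mine, not from the question):

        <?php
        // Append one serialized bucket to its index file. LOCK_EX guards
        // against interleaved writes from overlapping runs, and the return
        // value of file_put_contents is actually checked instead of ignored.
        function append_bucket($dir, $key, array $words)
        {
            $path = $dir . '/' . $key . '.idx';
            $data = serialize($words) . "\n";
            $written = file_put_contents($path, $data, FILE_APPEND | LOCK_EX);
            if ($written === false) {
                error_log("inverted index: failed to write bucket '$key' to $path");
                return false;
            }
            return true;
        }

    If the logged failures point at too many open files or filesystem limits, batching buckets per run (or moving the index into SQLite) would be the next thing to try.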


  • Doctrine stupid or me stupid?

    - by ropstah
    I want to use a single Doctrine install on our server and serve multiple websites. Naturally the models should be maintained in each website's models folder. I have everything set up (and not running) like so:

        Doctrine @ /CustomFramework/Doctrine
        Websites @ /var/www/vhosts/custom1.com/ and /var/www/vhosts/custom2.com/

    Generating works fine; all models are delivered in /application_folder/models and /application_folder/models/generated for the correct website. I've added Doctrine::loadModels('path_to_models') in the bootstrap file for each website, and also registered the autoloader. But... this is the autoloader code:

        public static function autoload($className)
        {
            if (strpos($className, 'sfYaml') === 0) {
                require dirname(__FILE__) . '/Parser/sfYaml/' . $className . '.php';
                return true;
            }

            if (0 !== stripos($className, 'Doctrine_') || class_exists($className, false) || interface_exists($className, false)) {
                return false;
            }

            $class = self::getPath() . DIRECTORY_SEPARATOR . str_replace('_', DIRECTORY_SEPARATOR, $className) . '.php';

            if (file_exists($class)) {
                require $class;
                return true;
            }

            return false;
        }

    Am I stupid, or is the autoloader really doing this:

        $class = self::getPath() . DIRECTORY_SEPARATOR . str_replace('_', DIRECTORY_SEPARATOR, $className) . '.php';

    or, in other words: does it require me to have ALL my generated Doctrine classes inside the Doctrine app directory? In other words, do I need a separate Doctrine installation for each website? I'm getting an error that the BaseXXX class cannot be found, so the autoloading doesn't function correctly. I really hope I'm doing something wrong... anyone?
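    For what it's worth, that autoload deliberately bails out for anything not prefixed Doctrine_ (the stripos check), so it never resolves your models at all; in Doctrine 1.x the generated models are handled by a separate models autoloader. A hedged sketch of a per-site bootstrap (paths from the question; depending on the exact 1.x version the methods may live on Doctrine_Core rather than Doctrine):

        <?php
        // Core autoloader: resolves Doctrine_* classes from the shared install.
        require_once '/CustomFramework/Doctrine/lib/Doctrine.php';
        spl_autoload_register(array('Doctrine', 'autoload'));

        // Models autoloader: resolves this site's own model and Base* classes
        // registered by loadModels(), without copying anything into the
        // Doctrine directory.
        spl_autoload_register(array('Doctrine', 'modelsAutoload'));
        Doctrine::loadModels('/var/www/vhosts/custom1.com/application_folder/models');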


  • How to determine the (natural) language of a document?

    - by Robert Petermeier
    I have a set of documents in two languages: English and German. There is no usable meta information about these documents; a program can look at the content only. Based on that, the program has to decide which of the two languages the document is written in. Is there any "standard" algorithm for this problem that can be implemented in a few hours' time? Or alternatively, a free .NET library or toolkit that can do this? I know about LingPipe, but it is (a) Java and (b) not free for "semi-commercial" usage. This problem seems to be surprisingly hard. I checked out the Google AJAX Language API (which I found by searching this site first), but it was ridiculously bad: of six web pages in German to which I pointed it, only one guess was correct; the other guesses were Swedish, English, Danish and French... A simple approach I came up with is to use a list of stop words. My app already uses such a list for German documents in order to analyze them with Lucene.Net. If my app scans the documents for occurrences of stop words from either language, the one with more occurrences would win. A very naive approach, to be sure, but it might be good enough. Unfortunately I don't have the time to become an expert at natural-language processing, although it is an intriguing topic.
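    The stop-word vote described above is only a few lines in .NET. A sketch for illustration (the word lists are abridged stand-ins, not a real linguistic resource):

        using System.Collections.Generic;
        using System.Text.RegularExpressions;

        static class LanguageGuesser
        {
            static readonly HashSet<string> English = new HashSet<string>
                { "the", "and", "of", "to", "in", "is", "that", "with" };
            static readonly HashSet<string> German = new HashSet<string>
                { "der", "die", "das", "und", "nicht", "ist", "ein", "mit" };

            // Count stop-word hits per language; the larger count wins.
            public static string Guess(string text)
            {
                int en = 0, de = 0;
                foreach (string w in Regex.Split(text.ToLowerInvariant(), @"\W+"))
                {
                    if (English.Contains(w)) en++;
                    if (German.Contains(w)) de++;
                }
                return en >= de ? "English" : "German";
            }
        }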


  • php regex word boundary matching in utf-8

    - by dontomaso
    Hi, I have the following PHP code in a UTF-8 PHP file:

        var_dump(setlocale(LC_CTYPE, 'de_DE.utf8', 'German_Germany.utf-8', 'de_DE', 'german'));
        var_dump(mb_internal_encoding());
        var_dump(mb_internal_encoding('utf-8'));
        var_dump(mb_internal_encoding());
        var_dump(mb_regex_encoding());
        var_dump(mb_regex_encoding('utf-8'));
        var_dump(mb_regex_encoding());
        var_dump(preg_replace('/\bweiß\b/iu', 'weiss', 'weißbier'));

    I would like the last regex to replace only full words and not parts of words. On my Windows computer, it returns:

        string 'German_Germany.1252' (length=19)
        string 'ISO-8859-1' (length=10)
        boolean true
        string 'UTF-8' (length=5)
        string 'EUC-JP' (length=6)
        boolean true
        string 'UTF-8' (length=5)
        string 'weißbier' (length=9)

    On the webserver (Linux), I get:

        string(10) "de_DE.utf8"
        string(10) "ISO-8859-1"
        bool(true)
        string(5) "UTF-8"
        string(10) "ISO-8859-1"
        bool(true)
        string(5) "UTF-8"
        string(9) "weissbier"

    Thus, the regex works as I expected on Windows but not on Linux. So the main question is: how should I write my regex to only match at word boundaries? A secondary question is how I can let Windows know that I want to use UTF-8 in my PHP application.
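    One explanation that fits both outputs: on the Linux build, ß does not count as a word character, so inside "weißbier" there is a \b between ß and b and the pattern matches; on Windows, the de_DE latin1 locale makes ß a word character and blocks the match. A way to sidestep the locale-dependent \b entirely (assuming PCRE was compiled with Unicode property support, as distro builds usually are) is to spell the boundary out with Unicode-letter lookarounds:

        <?php
        // \pL matches any Unicode letter under /u, so the lookarounds refuse
        // to match when "weiß" is glued to other letters on either side.
        var_dump(preg_replace('/(?<!\pL)weiß(?!\pL)/iu', 'weiss', 'weißbier'));
        // expected: "weißbier" stays untouched
        var_dump(preg_replace('/(?<!\pL)weiß(?!\pL)/iu', 'weiss', 'ein weiß bier'));
        // expected: "ein weiss bier"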


  • Are there any radix/patricia/critbit trees for Python?

    - by Andrew Dalke
    I have about 10,000 words used as a set of inverted indices to about 500,000 documents. Both are normalized, so the index is a mapping of integers (word id) to a set of integers (ids of documents which contain the word). My prototype uses Python's set as the obvious data type. When I do a search for a document I find the list of N search words and their corresponding N sets. I want to return the set of documents in the intersection of those N sets. Python's "intersect" method is implemented as a pairwise reduction. I think I can do better with a parallel search of sorted sets, so long as the library offers a fast way to get the next entry after i. I've been looking for something like that for some time. Years ago I wrote PyJudy, but I no longer maintain it and I know how much work it would take to get it to a stage where I'm comfortable with it again. I would rather use someone else's well-tested code, and I would like one which supports fast serialization/deserialization. I can't find any, or at least not any with Python bindings. There is avltree, which does what I want, but since even the pairwise set merge takes longer than I want, I suspect I want all my operations done in C/C++. Do you know of any radix/patricia/critbit tree libraries written as C/C++ extensions for Python? Failing that, what is the most appropriate library which I should wrap? The Judy Array site hasn't been updated in 6 years, with 1.0.5 released in May 2007. (Although it does build cleanly, so perhaps It Just Works.)
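    As a point of comparison while hunting for a C trie: the "next entry after i" merge is easy to prototype over sorted lists with bisect. A sketch (pure Python, so it demonstrates the algorithm rather than the speed you are after):

        from bisect import bisect_left

        def intersect_sorted(postings):
            """Intersect sorted, duplicate-free posting lists by probing each
            list for a candidate doc id instead of pairwise set-merging."""
            postings = sorted(postings, key=len)      # drive from the rarest word
            result = []
            for doc in postings[0]:
                found = True
                for plist in postings[1:]:
                    j = bisect_left(plist, doc)       # first entry >= doc
                    if j == len(plist) or plist[j] != doc:
                        found = False
                        break
                if found:
                    result.append(doc)
            return result

        # intersect_sorted([[1, 3, 7, 9], [3, 4, 7], [2, 3, 7, 8]]) -> [3, 7]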


  • How to split the story in InDesign Markup Language (IDML)

    - by BBDeveloper
    Hi All, I am learning InDesign and I have a doubt about story splitting... I want to split a story if the same story is used in two different text frames. I am extracting an IDML file and getting the needed data. My question is this: I have a single story which is used by two text frames (threaded text), and these two text frames are on two different spreads. Story.xml contains the full story; half of the story is displayed in the first text frame, the other half is displayed in the second text frame, and the same story ID is referred to in both text frames. Now I want to split this story into two, so I need to know how much content of the story is placed in the first and the second text frame. I think that if I find out how many characters or words or lines are present in each text frame, it will be easy to split the story. We can determine the word and character count using the following steps in InDesign CS4:

    1. Place the insertion point in a text frame to view counts for the entire thread of frames (the story), or select text to view counts only for the selected text.
    2. Choose Window > Info to display the Info panel.

    Is there any property or attribute in IDML (XML files like preferences.xml, story.xml or spread.xml) to find the number of characters/words/lines of a single text frame? Can anyone help me to solve this issue? Thanks in advance.


  • Execute a function until it returns NIL, collecting its values into a list

    - by Baldur
    I got this idea from XKCD's Hofstadter comic: what's the best way to create a conditional loop in (any) Lisp dialect that executes a function until it returns NIL, at which time it collects the returned values into a list? For those who haven't seen the joke, it goes that Douglas Hofstadter's "eight-word" autobiography consists of only six words: "I'm So Meta, Even This Acronym", with the continuation of the joke (some odd meta-paraprosdokian?) being "Is Meta" - the point being that the autobiography is actually "I'm So Meta, Even This Acronym Is Meta". But why not go deeper? Assume the acronymizing function META creates an acronym from a string and splits it into words, returning NIL if the string contains but one word:

        (meta "I'm So Meta, Even This Acronym")               => "Is Meta"
        (meta (meta "I'm So Meta, Even This Acronym"))        => "Im"
        (meta (meta (meta "I'm So Meta, Even This Acronym"))) => NIL
        (meta "GNU is Not UNIX")                              => "GNU"
        (meta (meta "GNU is Not UNIX"))                       => NIL

    Now I'm looking for how to implement a function so that:

        (so-function #'meta "I'm So Meta, Even This Acronym")
          => ("I'm So Meta, Even This Acronym" "Is Meta" "Im")
        (so-function #'meta "GNU is Not Unix")
          => ("GNU is Not Unix" "GNU")

    What's the best way of doing this?
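    A minimal sketch in Common Lisp, assuming META behaves as specified above; it collects the seed and every intermediate value until the function returns NIL:

        ;; Iterate F from the seed, collecting each value until F returns NIL.
        (defun so-function (f x)
          (loop for value = x then (funcall f value)
                while value
                collect value))

        ;; (so-function #'meta "GNU is Not Unix") => ("GNU is Not Unix" "GNU")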

