Search Results

Search found 12061 results on 483 pages for 'non printable'.


  • Index page content identical to page 1 of a gallery-type website

    - by WordPress Developer
    I have a gallery-type website, e.g. a site that lists blog posts or pictures in a paginated manner. However, I have 2 pages with identical content: example.com/index.html and example.com/page/1. Pages 2, 3 and so on naturally have different content. For SEO purposes, what is the best way of telling Google that page 1 is identical to index.html? Should I 302 redirect index.html to /page/1 so that index.html is non-existent, so to speak, or should I put a canonical tag in /page/1 (but not on /page/2) that points to index.html?
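
    For context, the second option described above is the standard rel="canonical" annotation; a minimal sketch, reusing the example.com URLs from the question (hypothetical markup, not taken from the original post):

      <!-- placed in the <head> of example.com/page/1, but not of /page/2 -->
      <link rel="canonical" href="http://example.com/index.html" />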

    Read the article

  • Ubuntu 13.10 AdobeAIR application: Failed to load module "unity-gtk-module"

    - by nobuzz
    I have installed AdobeAIR on 13.10, but am getting the following error messages when using it: Gtk-Message: Failed to load module "overlay-scrollbar" Gtk-Message: Failed to load module "unity-gtk-module" Has anyone faced this issue? I looked at a comment giving a possible resolution here, but that didn't help, as I already have those packages installed. Basically, to install AdobeAIR, it being 32-bit, I had to install various i386 packages: libgtk2.0-0:i386 libnss3-1d:i386 libnspr4-0d:i386 lib32nss-mdns libxml2:i386 libxslt1.1:i386 When I run the program as root, it opens up properly without errors. But that is not an option. Could it be a problem that the necessary modules are not getting loaded in a non-root user's environment?

    Read the article

  • How to change ownership of a domain name from "missing" web designer

    - by Stuart
    Hi, We had a website produced a few years ago with a .ORG domain name. The site hadn't grown with our needs, so we've now got a new .co.uk site. Our intention was to transfer the .ORG address to the new site on completion. Our new site is to go live soon, but the original .ORG site has gone offline (hosting expired, I believe, as the expiry date for the .ORG is in 2012) and we now discover that the .ORG domain name is registered to the web designer and not to anyone in our organisation. The WHOIS information gives us the technical contact as discountasp.net. What are the steps we can take here? Our primary concern is getting the name servers changed (the current .ORG address goes nowhere) and ultimately we need to transfer ownership. The organisation in question is a community, non-profit organisation, so our pockets are not deep. Thanks in advance for any help. Stuart.

    Read the article

  • perl scripts stdin/pipe reading problem [closed]

    - by user4541
    I have 2 scripts for a task. The 1st outputs lines of data (terminated with CR/LF) to STDOUT now and then. The 2nd keeps reading data from STDIN for further processing in the following way: use strict; my $dataline; while(1) { $dataline = ""; $dataline = <STDIN>; until( $dataline ne "") { sleep(1); $dataline = <STDIN>; } # further processing with a non-empty data line follows # } print "quitting...\n"; I redirect the output from the 1st to the 2nd using a pipe as follows: perl scpt1 | perl scpt2. But the problem I'm having with these 2 scripts is that it looks like the 2nd script keeps getting only the initial load of lines of data from the 1st script when there's no data anymore. Wonder if anybody who has had similar issues can kindly help a bit? Thanks.

    Read the article

  • Iron Speed Designer Review

    While Visual Studio allows developers to get productive fast by providing great design tools for a UI, it still lacks the ability to do smart layouts, data connections and queries. It is in this area that a RAD suite of applications can tremendously boost productivity by abstracting away some of these issues and saving developer time to focus on business intelligence instead of data extraction and presentation. When it comes to RAD application suites for managed web applications, there is none better than Iron Speed Designer. The ease with which you can create a data-centric web application and have different reports of your data within minutes is unparalleled. This review delves into what Iron Speed Designer has to offer as well as some of its limitations. Iron Speed works with .NET 2.0, 3.0, 3.5 and even the latest version, .NET 4.0. Read More >

    Read the article

  • An Intro to IIS URL Rewrite – plus redirecting URLs to www – Web Pro Week 8 of 52

    - by OWScott
    Today’s video post is an intro to URL Rewrite and the start of a few lessons on this powerful tool.  Additionally I cover how to rewrite URLs to add the www to the domain name for the sake of search engine optimization (SEO). This is week 8 of a 52 week series on various web administration related tasks.  Past and future videos can be found here. I have already written a blog post on this, so for those that prefer to read rather than watch, you can find it here. IIS URL Rewrite–redirecting non-www to www
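
    For readers who want the gist before watching, here is a minimal sketch of the kind of non-www to www redirect rule the post covers, assuming the standard IIS URL Rewrite web.config schema and using example.com purely as a placeholder domain (the snippet is not taken from the linked article):

      <!-- inside <system.webServer> in web.config -->
      <rewrite>
        <rules>
          <rule name="Redirect non-www to www" stopProcessing="true">
            <match url="(.*)" />
            <conditions>
              <add input="{HTTP_HOST}" pattern="^example\.com$" />
            </conditions>
            <action type="Redirect" url="http://www.example.com/{R:1}" redirectType="Permanent" />
          </rule>
        </rules>
      </rewrite>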

    Read the article

  • Exam 71-516: Accessing Data with Microsoft .NET Framework 4

    - by Ricardo Peres
    I had the chance to take the beta version of exam 71-516 today. Here are my thoughts on it: first, I was rather annoyed to discover that I will only know if I passed or not about 8 weeks after the beta period expires (July 02), which probably means September. It was a difficult exam, especially since I don't have any practice with some of the new Entity Framework options. The items covered, from the most covered to the least covered, were: Entity Framework (50-50 for POCO/Non-POCO) LINQ to SQL WCF Data Services Classic ADO.NET (DataSets, DataTables, DataAdapters, TableAdapters, Connections and Commands) LINQ to XML Sync Framework (surprise!) All added up, I think it was a difficult exam. My advice is that you practice a lot! I will post the result as soon as I know it.

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled: 4631488 lib/Makefile is too patient: .WAITs should be reduced This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot.

    In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game-changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of their publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object. We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match. In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like: # DATA(i386) __iob 0x3c0 # DATA(amd64,sparcv9) __iob 0xa00 # DATA(sparc) __iob 0x140 iob; A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script, used after both objects have been built, compares the real and stub objects, using data from elfdump, and validates that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
    Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues: The use of stylized comments was fine for a prototype, but not close to professional enough for a shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one. At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years. Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had them in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010: 6916788 ld version 2 mapfile syntax PSARC/2009/688 Human readable and extensible ld mapfile syntax In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010: 6916796 OSnet mapfiles should use version 2 link-editor syntax That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this: We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second: % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \ -ztext -zdefs -Bdirect ... real 0.019708910 user 0.010101680 sys 0.008528431 In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished. Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it. Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging. Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target is added to the Makefiles called stubinstall which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization. There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down: Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel. It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up. Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust. And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with 6993877 ld should produce stub objects PSARC/2010/397 ELF Stub Objects followed by the work to convert the ON consolidation in snv_161 (February 2011) with 7009826 OSnet should use stub objects 4631488 lib/Makefile is too patient: .WAITs should be reduced This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

    Conclusions and Looking Forward

    Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • Visualize flowchart diagram with multiple end symbols

    - by platzhirsch
    I am looking for a standardized way to visualize the following hierarchical logic: the state of the thread is represented by the answers to a hierarchical set of questions. You can read this listing like a flowchart: you iterate over the questions, decide, go one step deeper and so on. Therefore I thought the best way to visualize it was with a flowchart. The problem is, in this hierarchical set it is possible to end in more than one state, and that is totally valid. I have never seen a flowchart where you can end in more than one state. Is it still possible and I am just missing the right symbol to present this logic, or are flowcharts not a good fit anyway? What other graphical representation could I use, and is there something fitting in UML? A non-deterministic state machine does not seem intuitive enough, and transferring it into a deterministic state machine would result in too many states, and so on.

    Read the article

  • SQL SERVER – SSMS: Database Consistency History Report

    - by Pinal Dave
    Doctor and Database

    The last place I like to visit is always a hospital. With the monsoon season starting and intermittent rains, it has become sort of a routine to get a cycle of fever every other year (seriously, I hate it). So when I visit my doctor, it is always interesting the way he quizzes me. The routine questions of – “How many days have you had this?”, “Is there any pattern?”, “Did you get drenched in rain?”, “Do you have any other symptom?” and so on. The idea here is that the doctor wants to find any anomaly or pattern that will guide him to a viral or bacterial type. Most of the time they get it based on experience, and sometimes after a battery of tests. So if there is consistent behavior to your problem, there is always a solution out there. SQL Server has its own way to find out whether the server data/files are in a consistent state, using the DBCC commands.

    Back to SQL Server

    In real life, a database consistency check is one of the critical operations a DBA generally doesn’t give much priority to. Many readers of my blogs have asked many times, how do we know if the database is consistent? How do I read the output of DBCC CHECKDB and find out if everything is right or not? My common answer to all of them is – look at the bottom of the checkdb (or checktable) output and look for the line below. CHECKDB found 0 allocation errors and 0 consistency errors in database ‘DatabaseName’. The above is a “good sign” because we are seeing zero allocation and zero consistency errors. If you are seeing non-zero errors then there is some problem with the database. Sample output is shown below: CHECKDB found 0 allocation errors and 2 consistency errors in database ‘DatabaseName’. repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (DatabaseName). If we see non-zero errors then most of the time (not always) we get repair options depending on the level of corruption. There is risk involved with the above option (repair_allow_data_loss), that is – we would lose the data. Sometimes the option would be repair_rebuild, which is a little safer. Though these options are available, it is important to find the root cause of the problem. Among the standard reports there is a report which can show the history of CHECKDB executions for the selected database. Since this is a database-level report, we need to right-click on the database, click Reports, click Standard Reports and then choose the “Database Consistency History” report. The information in this report is picked from the default trace. If the default trace is disabled, or there has been no checkdb run, or the information is not there in the default trace (because it has rolled over), we would get a report like the one below. As we can see, the report says it very clearly: Currently, no execution history of CHECKDB is available or default trace is not enabled. To demonstrate, I caused corruption in one of the databases and did the below steps. Run CheckDB so that errors are reported. Fix the corruption by losing the data using the repair option. Run CheckDB again to check that the corruption is cleared. After that I launched the report, and below is what we would see. If you are lazy like me and don’t want to run the report manually for each database, then the below query would be handy to provide the same report for all databases. This query is run behind the scenes by the report. All I have done is remove the filter for database name (at the end – highlighted).
    DECLARE @curr_tracefilename VARCHAR(500);
    DECLARE @base_tracefilename VARCHAR(500);
    DECLARE @indx INT;

    SELECT @curr_tracefilename = path FROM sys.traces WHERE is_default = 1;
    SET @curr_tracefilename = REVERSE(@curr_tracefilename);
    SELECT @indx = PATINDEX('%\%', @curr_tracefilename);
    SET @curr_tracefilename = REVERSE(@curr_tracefilename);
    SET @base_tracefilename = LEFT(@curr_tracefilename, LEN(@curr_tracefilename) - @indx) + '\log.trc';

    SELECT SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), 36, PATINDEX('%executed%', TEXTData) - 36) AS command,
           LoginName,
           StartTime,
           CONVERT(INT, SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%found%', TEXTData) + 6, PATINDEX('%errors %', TEXTData) - PATINDEX('%found%', TEXTData) - 6)) AS errors,
           CONVERT(INT, SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%repaired%', TEXTData) + 9, PATINDEX('%errors.%', TEXTData) - PATINDEX('%repaired%', TEXTData) - 9)) AS repaired,
           SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%time:%', TEXTData) + 6, PATINDEX('%hours%', TEXTData) - PATINDEX('%time:%', TEXTData) - 6) + ':'
         + SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%hours%', TEXTData) + 6, PATINDEX('%minutes%', TEXTData) - PATINDEX('%hours%', TEXTData) - 6) + ':'
         + SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%minutes%', TEXTData) + 8, PATINDEX('%seconds.%', TEXTData) - PATINDEX('%minutes%', TEXTData) - 8) AS time
    FROM ::fn_trace_gettable(@base_tracefilename, DEFAULT)
    WHERE EventClass = 22 AND SUBSTRING(TEXTData, 36, 12) = 'DBCC CHECKDB'
    -- AND DatabaseName = @DatabaseName;

    Don’t get worried about the logic above. All it is doing is reading the trace files, parsing the entry below, and extracting the information for the underlined words. DBCC CHECKDB (CorruptedDatabase) executed by sa found 2 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 0 seconds. Internal database snapshot has split point LSN = 00000029:00000030:0001 and first LSN = 00000029:00000020:0001. Hopefully from now onwards you will run checkdb and understand the importance of it. As responsible DBAs I am sure you are already doing it; let me know how often you actually run it in your production environment. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Reports

    Read the article

  • Add AD Domain user to sudoers from the command line

    - by Wyatt Barnett
    I'm setting up an Ubuntu 11.04 server VM for use as a database server. It would make everyone's lives easier if we could have folks log in using Windows credentials and perhaps even make the machine work with the current AD-driven security we've got elsewhere. The first leg of this was really easy to accomplish -- apt-get install likewise-open and I was pretty much in business. The problem I'm having is getting our admins into sudoers -- I can't seem to get anything to take. I've tried: a) usermod -aG sudoers [username] b) adding the user names in several formats (DOMAIN\user, user@domain) to the sudoers file. None of which seemed to take; I still get told "DOMAIN\user is not in the sudoers file. This incident will be reported." So, how do I add non-local users to the sudoers file?
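
    For reference, the usual shape of the entries the question is reaching for, sketched with DOMAIN, linuxadmins and jsmith as invented placeholders, assuming the default likewise-open DOMAIN\user naming and the stock %admin rule that Ubuntu 11.04 ships in /etc/sudoers:

      # Added via visudo; DOMAIN, linuxadmins and jsmith are placeholders, not values from the post.
      # Backslashes must be doubled, and the name has to match exactly what `id` reports for the account.
      DOMAIN\\jsmith        ALL=(ALL) ALL    # a single AD account
      %DOMAIN\\linuxadmins  ALL=(ALL) ALL    # everyone in the AD group DOMAIN\linuxadmins

      # Alternative: add the AD account to the local group that the stock sudoers already
      # trusts (the "admin" group on 11.04) instead of editing sudoers at all:
      #   usermod -aG admin 'DOMAIN\jsmith'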

    Read the article

  • How to get the native version of Spotify running?

    - by Dante Ashton
    A while ago Spotify (the streaming music service) came out with a preview of their client for Linux. I had successfully run it throughout 10.04. Now that I'm on 10.10, I can't seem to find it in the package manager, let alone install it. Software Sources gives me this: Failed to fetch http://repository.spotify.com/dists/stable/Release Unable to find expected entry non-free/source/Sources in Meta-index file (malformed Release file?) Some index files failed to download, they have been ignored, or old ones used instead. So... as I'm paying for Spotify, what... umm... do I do? :P
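
    A note on the error: "Unable to find expected entry non-free/source/Sources" is what apt typically prints when a deb-src line points at a repository that publishes no source packages. As a sketch, the Spotify preview repository of that era was normally added with a binary-only line like the following (treat the exact line as an assumption, and remove any matching deb-src entry):

      # /etc/apt/sources.list.d/spotify.list (binary packages only; no deb-src line)
      deb http://repository.spotify.com stable non-free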

    Read the article

  • The ITC will examine HTC's mobile phones in response to the complaint filed by Apple

    Update of 01.04.2010 by Katleen. The ITC will examine HTC's mobile phones in response to the complaint filed by Apple. The ITC (U.S. International Trade Commission) is going to stick its nose into the dispute between Apple and HTC. The commission has decided to open an investigation by examining the smartphones produced by the Taiwanese manufacturer. It was Steve Jobs' company that turned to the ITC, filing a complaint for unauthorized use of its patents. Yesterday, an administrative judge at the ITC declared that he was taking the case. He now has 45 days to set a date for a further investigation. Apple is asking purely and simply that HTC's mobile phones be...

    Read the article

  • links for 2010-03-16

    - by Bob Rhubart
    @oracle_ace: Anti-Standards "I am a non-absolutist. Never say never or always. Having a few choice 'thou shalt not's' in your standards is ok. Having mostly 'thou shalt not' is creating an anti-standard." -- Lewis "@oracle_ace" Cunningham (tags: oracle otn oracleace standards) Dana Singleterry: OTN Developer Days - Alberta March 18 / Atlanta April 1 (Dana Singleterry's Weblog) Dana Singleterry's preview of upcoming OTN Developer Days. (tags: oracle otn events)

    Read the article

  • I'm interested in checking out a stack-oriented programming language. Which one would you recommend?

    - by Anto
    I'm interested in learning a stack-oriented programming language (such as Forth); which one would you recommend? The qualities I want are: You should be able to develop non-trivial software in it, but it needn't be a great language for that, as: I want to learn the language so I can try out a new paradigm (that is, not because I think I will have great use for it). The reason I want to learn another paradigm is that I want to broaden my views on different approaches (learn to think in new ways, different from OOP, functional and structured). The language should let me do that (learn to think differently). The language should have good resources available to learn from. The resources should also approach stack-oriented programming in a way that helps you understand the paradigm (after all, I am doing this for the paradigm).

    Read the article

  • Launch of the .42 domain name by open-source activists, a project backed by Tristan Nitot but criticized by others

    .42: launch of the first unofficial top-level domain. An example of a closed-off internet, or the first brick of an open one? 42, the universal answer to the great question of life, the universe and everything, is now more than just a geek reference. It is also a domain extension, in the same way as .com, .fr and .org. Several French free-software activists (including Tristan Nitot, president and founder of Mozilla Europe) have just launched a bold initiative: standing up to ICANN, the American body that controls the root DNS servers (whose neutrality is questioned by some, with

    Read the article

  • Upgrading in java web development

    - by Vladimir Ivanov
    I've been a Java web developer for nearly 3 years. I'm always trying to learn more and be better, but I still feel that my knowledge is not as good as I want it to be. In some places it still seems non-systematic and doesn't provide a strong enough base to solve problems as well as I'd like. The example I have is my senior developer, whose solutions are always more efficient and beautiful. So, the question is rather simple and hard at the same time: what is the right way to make my knowledge more systematic and therefore improve its quality? I understand that there is no practically good answer for all of Java programming, so let's focus on modern Java web or nearly-web technologies: JSF 2.0, JPA2 and Hibernate as the persistence provider, Web services, and Java SE as a core. What methodologies, books or learning techniques lead to a strong knowledge base within the given area?

    Read the article

  • How do you differentiate between "box," "machine," "computer" and whatever else?

    - by Corey
    There seem to be a few terms for referring to a computer, especially in the tech world. Different terms seem to be used based on technical expertise. When talking with people with some technical knowledge, I'll refer to it as a machine. When talking to non-technical people (family, friends), I'll call it a computer. On the rare occasion I'm talking about servers, I might call it a box, but even then I'll probably still call it a machine. Is that just me, or are there already rules for what to call a computer?

    Read the article

  • Oracle at SMAU 2012 - The CRM strategy and the Customer Experience approach: why companies need to serve their customers differently.

    - by Silvia Valgoi
    On October 18th Oracle was present at the Milan edition of SMAU 2012, within the Apps & Cloud Arena. Invited by AISM (the Italian marketing association), Oracle had the opportunity to take an active part with a talk in the thematic area “Managing your customers effectively through Customer Relationship Management applications”. The many people in attendance heard where, according to Oracle, real brand differentiation is generated – beyond the now-consolidated processes of marketing, sales and customer service – and where the new value for the business lies. If you could not attend, see Oracle's presentation here. For more information: Silvia Valgoi

    Read the article

  • Transparent Data Encryption Helps Customers Address Regulatory Compliance

    - by Troy Kitch
    Regulations such as the Payment Card Industry Data Security Standards (PCI DSS), U.S. state security breach notification laws, HIPAA HITECH and more, call for the use of data encryption or redaction to protect sensitive personally identifiable information (PII). From the outset, Oracle has delivered the industry's most advanced technology to safeguard data where it lives—in the database. Oracle provides a comprehensive portfolio of security solutions to ensure data privacy, protect against insider threats, and enable regulatory compliance for both Oracle and non-Oracle Databases. Organizations worldwide rely on Oracle Database Security solutions to help address industry and government regulatory compliance. Specifically, Oracle Advanced Security helps organizations like Educational Testing Service, TransUnion Interactive, Orbitz, and the National Marrow Donor Program comply with privacy and regulatory mandates by transparently encrypting sensitive information such as credit cards, social security numbers, and personally identifiable information (PII). By encrypting data at rest and whenever it leaves the database over the network or via backups, Oracle Advanced Security provides organizations the most cost-effective solution for comprehensive data protection. Watch the video and learn why organizations choose Oracle Advanced Security with transparent data encryption.

    Read the article

  • How to program for constraints/rules

    - by Gaurav
    First the background: during interviews in the past, I have many times been asked to design some variation of a card game as a programming puzzle, and I have tried to design it in an OO way, but I have never been satisfied with my solutions. However, it was not until recently that I realized that I had been approaching the problem from the wrong direction. Specifically, I was trying to solve the problem by modeling an individual card as an object. The problem with this is that individual cards don't have any non-trivial intrinsic behavior and therefore are not suitable (or primary) candidates for objects. What is interesting and important about cards are the rules and constraints, such as there being only four suits, or only thirteen cards in each suit. Of course, then there are any number of rules for games. So my questions are: Are there any idioms/constructs/patterns for programming with rules & constraints? How many of those from 1 can be applied in conjunction with the OO paradigm?
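
    One common way to act on that observation is to make the rules themselves first-class objects that are checked against a plain data model. A minimal sketch of the idea, written here in Python with invented names (it is not an answer from the original thread):

      from collections import Counter, namedtuple

      # Plain data: a card has no behavior of its own.
      Card = namedtuple("Card", ["suit", "rank"])

      SUITS = ("clubs", "diamonds", "hearts", "spades")
      RANKS = tuple(range(1, 14))  # 1..13

      # Each rule is just a callable that inspects the deck and returns an error string or None.
      def only_four_suits(deck):
          bad = {c.suit for c in deck} - set(SUITS)
          return f"unknown suits: {bad}" if bad else None

      def thirteen_cards_per_suit(deck):
          counts = Counter(c.suit for c in deck)
          bad = {s: n for s, n in counts.items() if n != 13}
          return f"wrong per-suit counts: {bad}" if bad else None

      class RuleSet:
          """A game is described by the constraints it imposes, not by the cards."""
          def __init__(self, rules):
              self.rules = list(rules)

          def violations(self, deck):
              return [msg for rule in self.rules if (msg := rule(deck)) is not None]

      standard_deck_rules = RuleSet([only_four_suits, thirteen_cards_per_suit])
      deck = [Card(s, r) for s in SUITS for r in RANKS]
      print(standard_deck_rules.violations(deck))  # prints [] because the deck satisfies every constraint

    Game-specific rules then become additional callables added to the RuleSet, which keeps the OO part (the RuleSet) small while the constraints stay declarative.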

    Read the article

  • Better name of Social Networking ! quora.com

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/07/23/better-name-of-social-networking--quora.com.aspx After writing my recent post “Facebook allow you to abuse in Non-English words?”, I am looking for a better social networking site. I found Facebook is little more than a waste of time. People share links and posts. Nowadays Facebook is a marketing tool. Twitter is not useful either when you can’t say anything meaningful within 140 characters. Now look at quora.com: it’s a very good site compared to the other two. You can read a lot of discussions there, on many topics that you will want to follow. I am happy to use Quora.com; no need for Facebook and Twitter. Both are dying already.

    Read the article

  • Improving performance of fuzzy string matching against a dictionary [closed]

    - by Nathan Harmston
    Hi, So I'm currently working with SecondString for fuzzy string matching, where I have a large dictionary to compare against (each entry in the dictionary has an associated non-unique identifier). I am currently using a hashMap to store this dictionary. When I want to do fuzzy string matching, I first check to see if the string is in the hashMap, and then I iterate through all of the other potential keys, calculating the string similarity and storing the k,v pair/s with the highest similarity. Depending on which dictionary I am using this can take a long time (12330 - 1800035 entries). Is there any way to speed this up? I am currently writing a memoization function/table as a way of speeding this up, but can anyone else think of a better way to improve the speed of this? Maybe a different structure or something else I'm missing. Many thanks in advance, Nathan
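
    One standard way to avoid scoring every key is to index the dictionary so that only plausible candidates are compared. A sketch of an n-gram prefilter, shown in Python with invented names and identifiers, with difflib's SequenceMatcher standing in for the SecondString similarity call (this is not code from the original question):

      from collections import defaultdict
      from difflib import SequenceMatcher  # stand-in for the SecondString similarity metric

      def ngrams(s, n=3):
          s = f"  {s.lower()} "          # pad so short strings still produce grams
          return {s[i:i + n] for i in range(len(s) - n + 1)}

      class NGramIndex:
          """Map each trigram to the dictionary keys containing it, so a query only
          scores keys that share at least `min_shared` trigrams with it."""
          def __init__(self, entries, n=3):
              self.n = n
              self.entries = dict(entries)        # key -> non-unique identifier
              self.index = defaultdict(set)
              for key in self.entries:
                  for g in ngrams(key, n):
                      self.index[g].add(key)

          def best_matches(self, query, min_shared=2, top_k=5):
              counts = defaultdict(int)
              for g in ngrams(query, self.n):
                  for key in self.index.get(g, ()):
                      counts[key] += 1
              candidates = [k for k, c in counts.items() if c >= min_shared]
              scored = ((SequenceMatcher(None, query, k).ratio(), k, self.entries[k])
                        for k in candidates)
              return sorted(scored, reverse=True)[:top_k]

      # Placeholder entries and identifiers, purely illustrative.
      idx = NGramIndex([("acetylsalicylic acid", "id-1"), ("acetic acid", "id-2")])
      print(idx.best_matches("acetylsalicilic acid"))

    The expensive similarity function then runs only on the handful of keys that share character n-grams with the query, rather than on the full 1.8 million entries.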

    Read the article

  • How to Use Steam In-Home Streaming

    - by Chris Hoffman
    Steam’s In-Home Streaming is now available to everyone, allowing you to stream PC games from one PC to another PC on the same local network. Use your gaming PC to power your laptops and home theater system. This feature doesn’t allow you to stream games over the Internet, only over the same local network. Even if you tricked Steam, you probably wouldn’t get good streaming performance over the Internet.

    Why Stream? When you use Steam In-Home streaming, one PC sends its video and audio to another PC. The other PC views the video and audio like it’s watching a movie, sending back mouse, keyboard, and controller input to the other PC. This allows you to have a fast gaming PC power your gaming experience on slower PCs. For example, you could play graphically demanding games on a laptop in another room of your house, even if that laptop has slower integrated graphics. You could connect a slower PC to your television and use your gaming PC without hauling it into a different room in your house. Streaming also enables cross-platform compatibility. You could have a Windows gaming PC and stream games to a Mac or Linux system. This will be Valve’s official solution for compatibility with old Windows-only games on the Linux (Steam OS) Steam Machines arriving later this year. NVIDIA offers their own game streaming solution, but it requires certain NVIDIA graphics hardware and can only stream to an NVIDIA Shield device.

    How to Get Started In-Home Streaming is simple to use and doesn’t require any complex configuration — or any configuration, really. First, log into the Steam program on a Windows PC. This should ideally be a powerful gaming PC with a powerful CPU and fast graphics hardware. Install the games you want to stream if you haven’t already — you’ll be streaming from your PC, not from Valve’s servers. (Valve will eventually allow you to stream games from Mac OS X, Linux, and Steam OS systems, but that feature isn’t yet available. You can still stream games to these other operating systems.) Next, log into Steam on another computer on the same network with the same Steam username. Both computers have to be on the same subnet of the same local network. You’ll see the games installed on your other PC in the Steam client’s library. Click the Stream button to start streaming a game from your other PC. The game will launch on your host PC, and it will send its audio and video to the PC in front of you. Your input on the client will be sent back to the server. Be sure to update Steam on both computers if you don’t see this feature. Use the Steam > Check for Updates option within Steam and install the latest update. Updating to the latest graphics drivers for your computer’s hardware is always a good idea, too.

    Improving Performance Here’s what Valve recommends for good streaming performance: Host PC: A quad-core CPU for the computer running the game, minimum. The computer needs enough processor power to run the game, compress the video and audio, and send it over the network with low latency. Streaming Client: A GPU that supports hardware-accelerated H.264 decoding on the client PC. This hardware is included on all recent laptops and PCs. If you have an older PC or netbook, it may not be able to decode the video stream quickly enough. Network Hardware: A wired network connection is ideal. You may have success with wireless N or AC networks with good signals, but this isn’t guaranteed.
    Game Settings: While streaming a game, visit the game’s settings screen and lower the resolution or turn off VSync to speed things up. In-Home Streaming Settings: On the host PC, click Steam > Settings and select In-Home Streaming to view the In-Home Streaming settings. You can modify your streaming settings to improve performance and reduce latency. Feel free to experiment with the options here and see how they affect performance — they should be self-explanatory. Check Valve’s In-Home Streaming documentation for troubleshooting information. You can also try streaming non-Steam games. Click Games > Add a Non-Steam Game to My Library on your host PC and add a PC game you have installed elsewhere on your system. You can then try streaming it from your client PC. Valve says this “may work but is not officially supported.” Image Credit: Robert Couse-Baker on Flickr, Milestoned on Flickr

    Read the article

  • Changes to keyboard layout reset on restart

    - by Matthieu Napoli
    I edited /usr/share/X11/xkb/symbols/fr to customize the french-dvorak layout. I then selected french-dvorak layout (instead of french). Now when I restart Ubuntu, I end up with the non-edited french-dvorak (my changes are ignored). But if I switch to french, then back to french-dvorak, my changes are now taken into account... How can I have my custom french-dvorak on startup? Is there some sort of cached version of the keyboard layout? I don't understand how it can switch me to the official french-dvorak because I changed it, so it should no longer exist.

    Read the article
