Search Results

Search found 26808 results on 1073 pages for 'storing information'.

Page 541/1073 | < Previous Page | 537 538 539 540 541 542 543 544 545 546 547 548  | Next Page >

  • Revolutionary brand powder packing machine price benefits from the marketplace boom, with uniform wear and a long service life

    - by user74606
    In mining and stone crushing, our machinery company's experience is most apparent. With a production capacity of 600~800 t/h, the mining stone crusher raises output 25~40 times over the mobile cone crushing plant it replaces, effectively solving the low yields and maintenance problems of the original mining stone crusher operation, and it accepts full chunks of mining stone. The maximum particle size for crushing is 1000x1200 mm, an effective answer to the original mine stone supply problem, where large chunks of stone could not be used in mines. Finished goods granularity is small, only 2~15 mm, an effective solution for the original mine stone size, which regularly blocked the production chute and even the grinding machine. The two types of material mix with great uniformity, and adding mining stone considerably improves desulfurization by weight; the amount added can reach 60%, effectively reducing the cost of raw materials. Electrical energy consumption has also fallen, dropping 1~2 kWh per ton of mining stone, an annual electricity saving of 100,000 yuan. Workers' labor intensity and environment have improved: because the mine stone powder packing machine has a high degree of automation, workers no longer handle materials by hand, so working conditions are much better. Along with these advantages for mine stone crushing, the CS series cone crusher has the following performance characteristics. The CS series cone crusher chamber is available in three different designs, and the user can choose according to the situation on site; crushing efficiency is high, product size is uniform, grain shape is good, and friction and wear on the rolling mortar wall are uniform, giving the crushing cavity a long service life. The CS series cone crusher uses a unique dust-proof seal that is reliable and effectively extends the lubricant replacement cycle and the service life of parts. Key components of the CS series spiral sand washer are manufactured from specially selected materials. With each stroke the broken cone moves away from the rolling mortar wall, allowing more material into the crushing cavity, forming a large discharge volume and speeding material through the crushing chamber. This machine uses the principle of a full crushing cavity, as well as laminated crushing and particle-on-particle fragmentation, so that the finished product has a greatly improved proportion of cubical stone and fewer needle-shaped particles, with more even grading.

    Read the article

  • networked storage for a research group, 10-100 TB

    - by Marc
    this is related to this post: http://serverfault.com/questions/80854/scalable-24-tb-nas-for-research-department but perhaps a little more general. Background: We're a research lab of around 10 people who do a lot of experiments that involve taking pictures at one of several lab setups and then analyzing them on one of several lab computers. Each experiment may produce 2 or 3 GB of data, and we are generating data at the rate of about 10 TB/year. Right now, we are storing the data on a 6-bay Netgear ReadyNAS Pro, but even with 2 TB drives, this only gives us 10 TB of storage. Also, right now we are not backing up at all. Our short-term backup plan is to get a second ReadyNAS, put it in a different building and mirror the one onto the other. Obviously, this is somewhat non-ideal. Our options: 1) We can pay our university $400/TB/year for "backed up" online storage. We trust them more than we trust us, but not a whole lot. 2) We can continue to buy small NASs and mirror them between offices. One limit, although stupid, is that we don't have an unlimited number of ethernet jacks. 3) We can try to implement our own data storage solution, which is why I'm asking you guys. One thing to consider is that we're a very transient population and none of us are network administration experts. I will probably be here only another year or so, and graduate students, who are here the longest, have a 5-6 year time scale. So nothing can require expert oversight. Our data transfer rates are low - most of the data will just sit on the server waiting for someone to look at it once or twice - so we don't need a really high-speed system. Given these constraints, can someone recommend a fairly low-cost, scalable, more or less turnkey shared data storage system with backup in a separate physical location? Does such a thing exist, or should we just pay the university to take care of it for us? As a second question, our professor just got tenure and is putting together a budget. Here the goal is to ask for as much as you can and hope you get a fraction of it. So the same question, minus the low-cost. Without budget constraints, can you recommend a scalable turnkey backed-up storage system? Thanks
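
    A minimal sketch of the short-term mirroring plan in option 2, assuming the second ReadyNAS is reachable as nas2.example.edu and exports an rsync module named data (the hostname, module name and schedule are made up for illustration):

        # Nightly one-way mirror of the primary NAS onto the off-site one.
        # --archive keeps permissions/timestamps; --delete makes it a true
        # mirror, so deletions propagate -- add --backup/--backup-dir if
        # that worries you.
        rsync --archive --delete /data/ rsync://nas2.example.edu/data/
        # cron entry: 0 3 * * * /root/mirror-to-nas2.sh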

    Read the article

  • Linux filesystem with inodes close on the disk

    - by pts
    I'd like to make the ls -laR /media/myfs on Linux as fast as possible. I'll have 1 million files on the filesystem, 2TB of total file size, and some directories containing as many as 10000 files. Which filesystem should I use and how should I configure it? As far as I understand, the reason why ls -laR is slow is that it has to stat(2) each inode (i.e. 1 million stat(2)s), and since inodes are distributed randomly on the disk, each stat(2) needs one disk seek. Here are some solutions I had in mind, none of which I am satisfied with: Create the filesystem on an SSD, because the seek operations on SSDs are fast. This wouldn't work, because a 2TB SSD doesn't exist, or it's prohibitively expensive. Create a filesystem which spans two block devices: an SSD and a disk; the disk contains file data, and the SSD contains all the metadata (including directory entries, inodes and POSIX extended attributes). Is there a filesystem which supports this? Would it survive a system crash (power outage)? Use find /media/myfs on ext2, ext3 or ext4, instead of ls -laR /media/myfs, because the former can take advantage of the d_type field (see the getdents(2) man page), so it doesn't have to stat. Unfortunately, this doesn't meet my requirements, because I need all file sizes as well, which find /media/myfs doesn't print. Use a filesystem, such as VFAT, which stores inodes in the directory entries. I'd love this one, but VFAT is not reliable and flexible enough for me, and I don't know of any other filesystem which does that. Do you? Of course, storing inodes in the directory entries wouldn't work for files with a link count of more than 1, but that's not a problem since I have only a few dozen such files in my use case. Adjust some settings in /proc or sysctl so that inodes are locked into system memory forever. This would not speed up the first ls -laR /media/myfs, but it would make all subsequent invocations amazingly fast. How can I do this? I don't like this idea, because it doesn't speed up the first invocation, which currently takes 30 minutes. Also I'd like to lock the POSIX extended attributes in memory as well. What do I have to do for that? Use a filesystem which has an online defragmentation tool, which can be instructed to relocate inodes to the beginning of the block device. Once the relocation is done, I can run dd if=/dev/sdb of=/dev/null bs=1M count=256 to get the beginning of the block device fetched into the kernel in-memory cache without seeking, and then the stat(2) operations would be fast, because they read from the cache. Is there a way to lock those inodes and/or blocks into memory once they have been read? Which filesystem has such a defragmentation tool?
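
    A minimal sketch of the sysctl idea above, assuming the metadata fits in RAM once it has been read; vm.vfs_cache_pressure is the relevant knob, and setting it to 0 tells the kernel never to reclaim dentry and inode caches (which can starve other workloads of memory, so treat it as an experiment):

        sysctl -w vm.vfs_cache_pressure=0   # never reclaim dentry/inode caches
        ls -laR /media/myfs > /dev/null     # slow first pass populates the cache
        # subsequent ls -laR runs are then served from memory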

    Read the article

  • Scaling a video processing application on EC2?

    - by Stpn
    I am approaching the need to scale a video-processing application that runs on EC2. So far the setup is one machine: Backbone.js frontend, Rails 3.2, PostgreSQL, Resque + S3 for storage. The flow of the app is as follows: 1) Request from frontend. Upload a video. 2) Storing the video. 3) Querying external APIs. 4) Processing / encoding the videos. 5) Post to frontend. I can separate the backend and frontend without any problems, but when it comes to distributing the backend between several servers I am a bit puzzled. I can probably come up with a temporary solution (like just duplicating the app across several instances), but since I don't really have expertise in backend system administration, there could be some fundamental mistakes. Also, I would rather have something that is scalable. I wonder if anyone can give some feedback on the following plan: A) Frontend machine. Just the frontend; talks to the backend via a REST API of sorts. B) Backend server (BS), main database. Gets requests from 1), posts to 2), saves uploads to 3). C) S3 storage. D) Server for querying APIs. Basically just Resque workers that post info back to 2). E) Server for video encoding. Processes videos uploaded on 3) and uploads them back. So I will have:

        A)frontend
             \
              \
        B)MAIN_APP/DB ----- C)S3 Storage (Files)
            /        \           /
           /          \         /
        D)ExternalAPI_queries  E)Video_Processing
        (redundant DB)         (redundant DB)

    All this will supposedly talk to each other via HTTP requests. My reason for this is that the video processing part is really the most resource-intensive, and I would just run a barebones application there that accepts requests and starts processing them. Questions: 1) In this setup I will have the main database at B), and all other servers will communicate with it via HTTP requests (and also store duplicates of the database, I guess, for safety reasons). Is this the right approach, or should I have one database that everyone connects to (and how, then)? 2) Is it a good idea to separate the API queries from the video processing part? Logically they are very close (the processing is determined by the result of the API queries), but resource-wise the video processing is way more intensive. 3) What should I use to distribute calls between the backend apps based on load?
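
    One sketch of how the work could be distributed in this plan: since Resque already keeps its queues in Redis, the backend B) and the workers on D) and E) can share a single Redis host instead of talking over ad-hoc HTTP. The host queue.internal and the queue name videos:encode are made up for illustration:

        # on B), enqueue an encoding job:
        redis-cli -h queue.internal LPUSH videos:encode '{"video_id":42,"s3_key":"uploads/42.mov"}'
        # on each E) box, a worker loop that blocks until a job arrives:
        while true; do
          job=$(redis-cli -h queue.internal BRPOP videos:encode 0 | tail -n 1)
          echo "encoding $job"   # run the encoder here, then report back to B)
        done

    Adding another encoding server is then just a matter of starting one more worker against the same queue.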

    Read the article

  • Domain: Netlogon event sequence

    - by Bob
    I'm getting really confused reading tutorials from the Samba HOWTO, which is a hell of a mess. Could you write, step by step, what events happen upon NetLogon? In particular, I can't get these things: I really can't get the mechanism of action of LDAP and its role. Should I think of Active Directory LDS as its superset? What are the other roles of AD, and why is this term nearly a synonym of the term "domain"? What's the role of LDAP in the remote login sequence? Does it store roaming user profiles? Does it store anything else? How is it called (are there any upper-level or lower-level services that use it in the course of NetLogon)? How do I join a domain? On the client machine I just use the Domain Controller admin credentials, but how do I prepare the Domain Controller for a new machine to join it? What's the deal with machine trust accounts? How are they used? Suppose I've just configured a machine to join a domain, created its machine trust, and added its data to the domain controller. How would that machine find a WINS server to query it for the Domain Controller's NetBIOS name? Does any computer name ending with the <1C> type correspond to a domain controller? In what cases are Kerberos and LM/NTLM used for authentication? Where are password hashes stored in, say, a Windows 2000 domain controller? Right in the registry? What is SAM - is it a service responsible for authentication and sending/storing those passwords and accompanying information, such as group policies etc.? Who calls it? Does it use Active Directory? What's the role of NetBIOS besides its name service? Can you exemplify a scenario of its usage as a "datagram distribution service for connectionless communication" or a "session service for connection-oriented communication"? (quotes taken from the http://en.wikipedia.org/wiki/NetBIOS_Frames_protocol description of NetBIOS roles) Thanks and sorry for many questions.
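
    As a sketch of the WINS lookup asked about above: Samba's nmblookup can query a WINS server for the <1C> record, which lists the domain controllers registered for a domain. The WINS address 192.168.1.10 and the domain name MYDOM are made up for illustration:

        nmblookup -R -U 192.168.1.10 'MYDOM#1c'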

    Read the article

  • Dell PowerEdge R720 - Corrupted RAID

    - by BT643
    Apologies in advance for the lengthy question. We have a Dell PowerEdge R720 server with: 2 x 136GB SAS drives in RAID 1 for the OS (Ubuntu Server 12.04) and 6 x 3TB SATA drives in RAID 5 for data. A few days ago we were getting errors when trying to access files on the large RAID 5 partition. We rebooted the server and got a message that the RAID controller had found a foreign config. We've had this before, and just needed to use Dell's RAID configuration utility to import the foreign config on the RAID. Last time this worked, but this time it started doing a disk check, then we got this:

        FSCK has returned the following:
        "/dev/sdb1 inode 364738 has a bad extended attribute block 7
        /dev/sdb1 unexpected inconsistency run fsck manually (i.e., without -a or -p options)
        MOUNTALL fsck /ourdatapartition [1019] terminated with status 4
        MOUNTALL filesystem has errors /ourdatapartition
        errors were found while checking the disk drive for /ourdatapartition
        Press F to fix errors, I to Ignore or M for Manual Recovery"

    We pressed F to try and fix the errors, but it eventually errored with:

        Inode 275841084, i_blocks is 167080, should be 0. Fix? yes
        Inode 275841141 has an invalid extend node (blk 2206761006, lblk 0) Clear? yes
        Inode 275841141, i_blocks is 227872, should be 0. Fix? yes
        Inode 275842303 has an invalid extend node (blk 2206760975, lblk 0) Clear? yes
        ....
        Error storing directory block information (inode=275906766, block=0, num=2699516178): Memory allocation failed
        /dev/sdb1: ***** FILE SYSTEM WAS MODIFIED *****
        e2fsck: aborted
        /dev/sdb1: ***** FILE SYSTEM WAS MODIFIED *****
        mountall: fsck /ourdatapartition [1286] terminated with status 9
        mountall: Unrecoverable fsck error: /ourdatapartition

    We noticed one of the drive lights was not lit at all, and thought this drive may have failed and be the problem. We replaced the drive with a spare and tried "F" to repair it again, but we just keep getting the same error as above. In the RAID configuration utility, all drives show as "online" and "optimal". We do have this data on another replicated server, so we're not worried about "recovering" anything; we just want to get the system back online asap. The server has 64 or 32GB memory, I can't remember off the top of my head, but either way, with a 14TB RAID, I think it may still not be enough. Thanks. EDIT - I checked the memory usage while fsck was running, as suggested, and after 2 or 3 minutes it was using up nearly all of our server's memory; when it failed after 5 minutes or so with the error in my post, the memory immediately freed up again.
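
    A hedged workaround for the "Memory allocation failed" abort, not from the original thread: e2fsck can spill its internal tables to a scratch directory on disk instead of holding them all in RAM, which is the usual escape hatch when fsck of a very large filesystem runs out of memory. Put the directory on a disk other than the one being checked; the path below is an example:

        mkdir -p /var/cache/e2fsck
        cat >> /etc/e2fsck.conf <<'EOF'
        [scratch_files]
        directory = /var/cache/e2fsck
        EOF
        e2fsck -fy /dev/sdb1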

    Read the article

  • Moving default web site to another drive

    - by Chadworthington
    I set the default location from c:\inetpub\wwwroot to d:\inetpub\wwwroot, but when I access my .NET 4.0 site I get this error:

        Description: An error occurred during the processing of a configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately.
        Parser Error Message: Unrecognized attribute 'targetFramework'. Note that attribute names are case-sensitive.
        Source Error:
        Line 105: Set explicit="true" to force declaration of all variables.
        Line 106: -->
        Line 107: <compilation debug="true" strict="true" explicit="true" targetFramework="4.0">
        Line 108: <assemblies>
        Line 109: <add assembly="System.Web.Extensions.Design, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>

    When I try to Manage the Basic Settings on the site and click the "Test Settings" button, I see that I have a problem under "authorization": The server is configured to use pass-through authentication with a built-in account to access the specified physical path. However, IIS Manager cannot verify whether the built-in account has access. Make sure that the application pool identity has Read access to the physical path. If this server is joined to a domain, and the application pool identity is NetworkService or LocalSystem, verify that <domain>\<computer_name>$ has Read access to the physical path. Then test these settings again. 1) Do I need to grant rights to IIS to the new folder? Which user? I thought it was something like IIS_USER or similar, but I cannot determine the correct name of the user. 2) Also, do I need to set the default version of the framework somewhere, at the Default Site level or at the virtual folder level? How is this done in IIS6? I am used to IIS5, or whatever came with XP Pro. 3) My original site had a subfolder under wwwroot called "aspnet_client." How was this created? I manually copied it to the corresponding new location. My app was using separate ASP-specific databases for storing session state and role info, if that is relevant. Thanks
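
    A hedged sketch for question 1: on IIS7 the built-in group covering the worker processes is IIS_IUSRS, so granting it read access to the new path is the usual first step (run from an elevated command prompt; the path is an example):

        icacls "d:\inetpub\wwwroot" /grant "BUILTIN\IIS_IUSRS:(OI)(CI)(RX)"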

    Read the article

  • Puppet and launchd services?

    - by Joel Westberg
    We have a production environment configured with Puppet, and want to be able to set up a similar environment on our development machines: a mix of Red Hats, Ubuntus and OSX. As might be expected, OSX is the odd man out here, and sadly, I'm having a lot of trouble with getting this to work. My first attempt was using macports, using the following declaration:

        package { 'rabbitmq-server':
          ensure   => installed,
          provider => macports,
        }

    but this, sadly, generates the following error:

        Error: /Stage[main]/Rabbitmq/Package[rabbitmq-server]: Could not evaluate: Execution of '/opt/local/bin/port -q installed rabbitmq-server' returned 1: usage: cut -b list [-n] [file ...]
        cut -c list [file ...]
        cut -f list [-s] [-d delim] [file ...]
        while executing
        "exec dscl -q . -read /Users/$env(SUDO_USER) NFSHomeDirectory | cut -d ' ' -f 2"
        (procedure "mportinit" line 95)
        invoked from within
        "mportinit ui_options global_options global_variations"

    Next up, I figured I'd give homebrew a try. There is no package provider available by default, but puppet-homebrew seemed promising. Here, I got much farther, and actually managed to get the install to work.

        package { 'rabbitmq':
          ensure   => installed,
          provider => brew,
        }
        file { "plist":
          path   => "/Library/LaunchDaemons/homebrew.mxcl.rabbitmq.plist",
          source => "/usr/local/opt/rabbitmq/homebrew.mxcl.rabbitmq.plist",
          ensure => present,
          owner  => root,
          group  => wheel,
          mode   => 0644,
        }
        service { "homebrew.mxcl.rabbitmq":
          enable   => true,
          ensure   => running,
          provider => "launchd",
          require  => [ File["/Library/LaunchDaemons/homebrew.mxcl.rabbitmq.plist"] ],
        }

    Here, I don't get any error. But RabbitMQ doesn't start either (as it does if I do a manual load with launchctl):

        [... snip ...]
        Debug: Executing '/bin/launchctl list'
        Debug: Executing '/usr/bin/plutil -convert xml1 -o /dev/stdout /Library/LaunchDaemons/homebrew.mxcl.rabbitmq.plist'
        Debug: Executing '/usr/bin/plutil -convert xml1 -o /dev/stdout /var/db/launchd.db/com.apple.launchd/overrides.plist'
        Debug: /Schedule[weekly]: Skipping device resources because running on a host
        Debug: /Schedule[puppet]: Skipping device resources because running on a host
        Debug: Finishing transaction 2248294820
        Debug: Storing state
        Debug: Stored state in 0.01 seconds
        Finished catalog run in 25.90 seconds

    What am I doing wrong?
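
    For comparison, a sketch of the manual steps the launchd service provider is expected to perform; running them by hand shows whether the plist itself, rather than Puppet, is the problem:

        sudo launchctl load -w /Library/LaunchDaemons/homebrew.mxcl.rabbitmq.plist
        sudo launchctl list | grep rabbitmq   # should show a PID if it started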

    Read the article

  • cd Command Linux and Mystery Flags

    - by Jason R. Mick
    Platform: CentOS 6.2. Shell: tcsh. I'm playing around with cd for a BASH script, and noticed the wondrous cd - option, but was left with many questions... Why the cd -? Isn't this redundant with cd ..? EDIT [As FatalError points out, these two commands don't do the same things... so the answer is "no"] Can you delve farther back into your history with the - flag, a la in a browser? e.g. When I type cd -, it takes me to my previous directory, but then if I enter that command again, it takes me to the directory I just came from, creating a sort of loop. Is a shorthand for going back multiple levels supported? EDIT: I realize I can go back with cd .., but was hoping this could be a gateway to a less verbose deep back, e.g. cd -3 vs. cd ../../../ ... hopefully that clarifies what I'm asking... EDIT 2: As to the current feedback, while .. is a special directory, I don't see a reason why the shell's cd built-in couldn't use a shorthand for ../../ ... ../, e.g. cd ..5, or why the built-in also couldn't have a history (a la auto pushd/popd) that could be turned on and used like cd -3. I get that this could be somewhat of a security/privacy risk, but I don't see how it's any worse than storing a command history, which most shells/terminals do. The manpage for cd, accessible via man cd and help cd (it's the same for either command), only lists the -L and -P flags. However, when I type in cd --help it outputs Usage: cd [-plvn][-|<dir>].. Am I right in assuming the other flags and the - (back) option are nonstandard? What are the -n and -v flags for? Both seem to take me back to my home directory; that's all I've been able to figure out via experimentation. A quick read of web resources [1][2] offered just the same sort of info that the man page did and didn't answer my questions. Note: The second Linux-centric resource above claimed cd only had two options (obviously not true in current CentOS), hence my assumption that this functionality could be non-standard.
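
    A short illustration of the behavior asked about (bash semantics shown; tcsh's cd - works the same way): cd - is just shorthand for cd "$OLDPWD", a one-entry history, which is why repeating it bounces between two directories, while pushd and popd provide the deeper, browser-like history:

        cd /tmp && cd /var/log
        cd -                 # back to /tmp
        cd -                 # forward to /var/log again: the two entries swap
        pushd /etc           # a real directory stack...
        pushd /usr/share
        popd && popd         # ...unwinds multiple levels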

    Read the article

  • Synchronizing XML file with MySQL database

    - by Fred K
    My company uses internal management software for storing products. They want to transfer all the products into a MySQL database so they can make their products available on the company website. Note: they will continue to use their own internal software. This software can export all the products in various file formats (including XML). The synchronization does not have to be in real time; they are satisfied with synchronizing the MySQL database once a day (late at night). Also, each product in their software has one or more images, so I also have to make the images available on the website. Here is an example of an XML export:

        <?xml version="1.0" encoding="UTF-8"?>
        <export_management userid="78643">
          <product id="1234">
            <version>100</version>
            <insert_date>2013-12-12 00:00:00</insert_date>
            <warrenty>true</warrenty>
            <price>139,00</price>
            <model>
              <code>324234345</code>
              <model>Notredame</model>
              <color>red</color>
              <size>XL</size>
            </model>
            <internal>
              <color>green</color>
              <size>S</size>
            </internal>
            <options>
              <s_option>some option</s_option>
              <s_option>some option</s_option>
              <extra_option>some option</extra_option>
              <extra_option>some option</extra_option>
            </options>
            <images>
              <image>
                <small>1234_0.jpg</small>
              </image>
              <image>
                <small>1234_1.jpg</small>
              </image>
            </images>
          </product>
        </export_management>

    Any ideas on how I can do this? Or better approaches, if you have any.
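
    A sketch of the nightly import, assuming the export lands in /exports/products.xml and a flat staging table products_xml whose column names match the top-level product fields (all names are made up; the nested model/options/images parts would need a post-processing step, since LOAD XML only maps one level of elements to columns):

        mysql shopdb <<'EOF'
        TRUNCATE products_xml;
        LOAD XML LOCAL INFILE '/exports/products.xml'
        INTO TABLE products_xml
        ROWS IDENTIFIED BY '<product>';
        EOF
        # cron entry for a late-night run: 30 3 * * * /usr/local/bin/import-products.sh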

    Read the article

  • Cloud storage provider lost my data. How to back up next time?

    - by tomcam
    What do you do when cloud storage fails you? First, some background. A popular cloud storage provider (rhymes with Booger Link) damaged a bunch of my data. Getting it back was an uphill battle, with all the usual accusations that it was my fault, etc. Finally I got the data back. Yes, I can back this up with evidence. Idiotically, I stayed with them, so I totally get that the rest of this is on me. The problem had been with a shared folder that syncs across all 12 computers my business and family use with the service. We'll call that folder the Tragic Briefcase. It is a sort of global folder that's publicly visible to all computers on the service. It's our main repository. Today I decided to deal with some residual effects of the Crash of '11. Part of the damage was that on just one of my computers (my primary, of course) all the documents in the Tragic Briefcase were duplicated in the Windows My Documents folder. I finally started deleting them. But guess what. Though they appeared to be duplicated in the file system, removing them from My Documents on the primary PC caused them to disappear from the Tragic Briefcase too. They efficiently disappeared from all the other computers' Tragic Briefcases as well. So now 21 gigs of files are gone, and of course I don't know which ones. I want to avoid this in the future. Apart from using a different storage provider, the bigger picture is this: how do I back up my cloud data? A complete backup every week or so from web to local storage would cause me to exceed my ISP's bandwidth. Do I need to back up each of my 12 PCs locally? I do use Backupify for my primary Google Docs, but I have been storing taxes, confidential documents, Photoshop source, video source files, and so on using the web service. So it's a lot of data, but I need to keep it safe. Backing up locally would also mean 2 backup drives or some kind of RAID per PC, right? Because you can't trust a single point of failure. Assuming I move to DropBox or something of its ilk, what is the best way to make sure that if the next cloud storage provider messes up, I can restore?
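
    A sketch of one local safety net against exactly this failure mode, using rsync for brevity (available on Windows via Cygwin or cwRsync; the paths and schedule are examples): mirror the synced folder to a local drive nightly, diverting everything deleted or changed into a dated side directory so a propagated delete stays recoverable:

        rsync -a --delete \
              --backup --backup-dir="/mnt/backup/changed-$(date +%F)" \
              "$HOME/TragicBriefcase/" /mnt/backup/current/
        # cron entry: 0 2 * * * /usr/local/bin/briefcase-backup.sh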

    Read the article

  • Where can these be posted besides the Python Cookbook?

    - by Noctis Skytower
    Whitespace Assembler

    #! /usr/bin/env python
    """Assembler.py
    Compiles a program from "Assembly" folder into "Program" folder.
    Can be executed directly by double-click or on the command line.
    Give name of *.WSA file without extension (example: stack_calc)."""

    ################################################################################

    __author__ = 'Stephen "Zero" Chappell <[email protected]>'
    __date__ = '14 March 2010'
    __version__ = '$Revision: 3 $'

    ################################################################################

    import string
    from Interpreter import INS, MNEMONIC

    ################################################################################

    def parse(code):
        program = []
        process_virtual(program, code)
        process_control(program)
        return tuple(program)

    def process_virtual(program, code):
        for line, text in enumerate(code.split('\n')):
            if not text or text[0] == '#':
                continue
            if text.startswith('part '):
                parse_part(program, line, text[5:])
            elif text.startswith('     '):
                parse_code(program, line, text[5:])
            else:
                syntax_error(line)

    def syntax_error(line):
        raise SyntaxError('Line ' + str(line + 1))

    ################################################################################

    def process_control(program):
        parts = get_parts(program)
        names = dict(pair for pair in zip(parts, generate_index()))
        correct_control(program, names)

    def get_parts(program):
        parts = []
        for ins in program:
            if isinstance(ins, tuple):
                ins, arg = ins
                if ins == INS.PART:
                    if arg in parts:
                        raise NameError('Part definition was found twice: ' + arg)
                    parts.append(arg)
        return parts

    def generate_index():
        index = 1
        while True:
            yield index
            index *= -1
            if index > 0:
                index += 1

    def correct_control(program, names):
        for index, ins in enumerate(program):
            if isinstance(ins, tuple):
                ins, arg = ins
                if ins in HAS_LABEL:
                    if arg not in names:
                        raise NameError('Part definition was never found: ' + arg)
                    program[index] = (ins, names[arg])

    ################################################################################

    def parse_part(program, line, text):
        if not valid_label(text):
            syntax_error(line)
        program.append((INS.PART, text))

    def valid_label(text):
        if not between_quotes(text):
            return False
        label = text[1:-1]
        if not valid_name(label):
            return False
        return True

    def between_quotes(text):
        if len(text) < 3:
            return False
        if text.count('"') != 2:
            return False
        if text[0] != '"' or text[-1] != '"':
            return False
        return True

    def valid_name(label):
        valid_characters = string.ascii_letters + string.digits + '_'
        valid_set = frozenset(valid_characters)
        label_set = frozenset(label)
        if len(label_set - valid_set) != 0:
            return False
        return True

    ################################################################################

    from Interpreter import HAS_LABEL, Program

    NO_ARGS = Program.NO_ARGS
    HAS_ARG = Program.HAS_ARG
    TWO_WAY = tuple(set(NO_ARGS) & set(HAS_ARG))

    ################################################################################

    def parse_code(program, line, text):
        for ins, word in enumerate(MNEMONIC):
            if text.startswith(word):
                check_code(program, line, text[len(word):], ins)
                break
        else:
            syntax_error(line)

    def check_code(program, line, text, ins):
        if ins in TWO_WAY:
            if text:
                number = parse_number(line, text)
                program.append((ins, number))
            else:
                program.append(ins)
        elif ins in HAS_LABEL:
            text = parse_label(line, text)
            program.append((ins, text))
        elif ins in HAS_ARG:
            number = parse_number(line, text)
            program.append((ins, number))
        elif ins in NO_ARGS:
            if text:
                syntax_error(line)
            program.append(ins)
        else:
            syntax_error(line)

    def parse_label(line, text):
        if not text or text[0] != ' ':
            syntax_error(line)
        text = text[1:]
        if not valid_label(text):
            syntax_error(line)
        return text

    ################################################################################

    def parse_number(line, text):
        if not valid_number(text):
            syntax_error(line)
        return int(text)

    def valid_number(text):
        if len(text) < 2:
            return False
        if text[0] != ' ':
            return False
        text = text[1:]
        if '+' in text and '-' in text:
            return False
        if '+' in text:
            if text.count('+') != 1:
                return False
            if text[0] != '+':
                return False
            text = text[1:]
            if not text:
                return False
        if '-' in text:
            if text.count('-') != 1:
                return False
            if text[0] != '-':
                return False
            text = text[1:]
            if not text:
                return False
        valid_set = frozenset(string.digits)
        value_set = frozenset(text)
        if len(value_set - valid_set) != 0:
            return False
        return True

    ################################################################################
    ################################################################################

    from Interpreter import partition_number

    VMC_2_TRI = {
        (INS.PUSH, True): (0, 0),
        (INS.COPY, False): (0, 2, 0),
        (INS.COPY, True): (0, 1, 0),
        (INS.SWAP, False): (0, 2, 1),
        (INS.AWAY, False): (0, 2, 2),
        (INS.AWAY, True): (0, 1, 2),
        (INS.ADD, False): (1, 0, 0, 0),
        (INS.SUB, False): (1, 0, 0, 1),
        (INS.MUL, False): (1, 0, 0, 2),
        (INS.DIV, False): (1, 0, 1, 0),
        (INS.MOD, False): (1, 0, 1, 1),
        (INS.SET, False): (1, 1, 0),
        (INS.GET, False): (1, 1, 1),
        (INS.PART, True): (2, 0, 0),
        (INS.CALL, True): (2, 0, 1),
        (INS.GOTO, True): (2, 0, 2),
        (INS.ZERO, True): (2, 1, 0),
        (INS.LESS, True): (2, 1, 1),
        (INS.BACK, False): (2, 1, 2),
        (INS.EXIT, False): (2, 2, 2),
        (INS.OCHR, False): (1, 2, 0, 0),
        (INS.OINT, False): (1, 2, 0, 1),
        (INS.ICHR, False): (1, 2, 1, 0),
        (INS.IINT, False): (1, 2, 1, 1)
    }

    ################################################################################

    def to_trinary(program):
        trinary_code = []
        for ins in program:
            if isinstance(ins, tuple):
                ins, arg = ins
                trinary_code.extend(VMC_2_TRI[(ins, True)])
                trinary_code.extend(from_number(arg))
            else:
                trinary_code.extend(VMC_2_TRI[(ins, False)])
        return tuple(trinary_code)

    def from_number(arg):
        code = [int(arg < 0)]
        if arg:
            for bit in reversed(list(partition_number(abs(arg), 2))):
                code.append(bit)
            return code + [2]
        return code + [0, 2]

    to_ws = lambda trinary: ''.join(' \t\n'[index] for index in trinary)

    def compile_wsa(source):
        program = parse(source)
        trinary = to_trinary(program)
        ws_code = to_ws(trinary)
        return ws_code

    ################################################################################
    ################################################################################

    import os
    import sys
    import time
    import traceback

    def main():
        name, source, command_line, error = get_source()
        if not error:
            start = time.clock()
            try:
                ws_code = compile_wsa(source)
            except:
                print('ERROR: File could not be compiled.\n')
                traceback.print_exc()
                error = True
            else:
                path = os.path.join('Programs', name + '.ws')
                try:
                    open(path, 'w').write(ws_code)
                except IOError as err:
                    print(err)
                    error = True
                else:
                    div, mod = divmod((time.clock() - start) * 1000, 1)
                    args = int(div), '{:.3}'.format(mod)[1:]
                    print('DONE: Comipled in {}{} ms'.format(*args))
        handle_close(error, command_line)

    def get_source():
        if len(sys.argv) > 1:
            command_line = True
            name = sys.argv[1]
        else:
            command_line = False
            try:
                name = input('Source File: ')
            except:
                return None, None, False, True
            print()
        path = os.path.join('Assembly', name + '.wsa')
        try:
            return name, open(path).read(), command_line, False
        except IOError as err:
            print(err)
            return None, None, command_line, True

    def handle_close(error, command_line):
        if error:
            usage = 'Usage: {} <assembly>'.format(os.path.basename(sys.argv[0]))
            print('\n{}\n{}'.format('-' * len(usage), usage))
        if not command_line:
            time.sleep(10)

    ################################################################################

    if __name__ == '__main__':
        main()

    Whitespace Helpers

    #! /usr/bin/env python
    """Helpers.py
    Includes a function to encode Python strings into my WSA format.
    Has a "PRINT_LINE" function that can be copied to a WSA program.
    Contains a "PRINT" function and documentation as an explanation."""

    ################################################################################

    __author__ = 'Stephen "Zero" Chappell <[email protected]>'
    __date__ = '14 March 2010'
    __version__ = '$Revision: 1 $'

    ################################################################################

    def encode_string(string, addr):
        print('     push', addr)
        print('     push', len(string))
        print('     set')
        addr += 1
        for offset, character in enumerate(string):
            print('     push', addr + offset)
            print('     push', ord(character))
            print('     set')

    ################################################################################

    # Prints a string with newline.
    #      push addr
    #      call "PRINT_LINE"

    """
    part "PRINT_LINE"
         call "PRINT"
         push 10
         ochr
         back
    """

    ################################################################################

    # def print(array):
    #     if len(array) <= 0:
    #         return
    #     offset = 1
    #     while len(array) - offset >= 0:
    #         ptr = array.ptr + offset
    #         putch(array[ptr])
    #         offset += 1

    """
    part "PRINT"
         # Line 1-2
         copy
         get
         less "__PRINT_RET_1"
         copy
         get
         zero "__PRINT_RET_1"
         # Line 3
         push 1
         # Line 4
    part "__PRINT_LOOP"
         copy
         copy 2
         get
         swap
         sub
         less "__PRINT_RET_2"
         # Line 5
         copy 1
         copy 1
         add
         # Line 6
         get
         ochr
         # Line 7
         push 1
         add
         goto "__PRINT_LOOP"
    part "__PRINT_RET_2"
         away
    part "__PRINT_RET_1"
         away
         back
    """

    Whitespace Interpreter

    #! /usr/bin/env python
    """Interpreter.py
    Runs programs in "Programs" and creates *.WSO files when needed.
    Can be executed directly by double-click or on the command line.
    If run on command line, add "ASM" flag to dump program assembly."""

    ################################################################################

    __author__ = 'Stephen "Zero" Chappell <[email protected]>'
    __date__ = '14 March 2010'
    __version__ = '$Revision: 4 $'

    ################################################################################

    def test_file(path):
        disassemble(parse(trinary(load(path))), True)

    ################################################################################

    load = lambda ws: ''.join(c for r in open(ws) for c in r if c in ' \t\n')
    trinary = lambda ws: tuple(' \t\n'.index(c) for c in ws)

    ################################################################################

    def enum(names):
        names = names.replace(',', ' ').split()
        space = dict((reversed(pair) for pair in enumerate(names)), __slots__=())
        return type('enum', (object,), space)()

    INS = enum('''\
    PUSH, COPY, SWAP, AWAY, \
    ADD, SUB, MUL, DIV, MOD, \
    SET, GET, \
    PART, CALL, GOTO, ZERO, LESS, BACK, EXIT, \
    OCHR, OINT, ICHR, IINT''')

    ################################################################################

    def parse(code):
        ins = iter(code).__next__
        program = []
        while True:
            try:
                imp = ins()
            except StopIteration:
                return tuple(program)
            if imp == 0:        # [Space]
                parse_stack(ins, program)
            elif imp == 1:      # [Tab]
                imp = ins()
                if imp == 0:    # [Tab][Space]
                    parse_math(ins, program)
                elif imp == 1:  # [Tab][Tab]
                    parse_heap(ins, program)
                else:           # [Tab][Line]
                    parse_io(ins, program)
            else:               # [Line]
                parse_flow(ins, program)

    def parse_number(ins):
        sign = ins()
        if sign == 2:
            raise StopIteration()
        buffer = ''
        code = ins()
        if code == 2:
            raise StopIteration()
        while code != 2:
            buffer += str(code)
            code = ins()
        if sign == 1:
            return int(buffer, 2) * -1
        return int(buffer, 2)

    ################################################################################

    def parse_stack(ins, program):
        code = ins()
        if code == 0:       # [Space]
            number = parse_number(ins)
            program.append((INS.PUSH, number))
        elif code == 1:     # [Tab]
            code = ins()
            number = parse_number(ins)
            if code == 0:   # [Tab][Space]
                program.append((INS.COPY, number))
            elif code == 1: # [Tab][Tab]
                raise StopIteration()
            else:           # [Tab][Line]
                program.append((INS.AWAY, number))
        else:               # [Line]
            code = ins()
            if code == 0:   # [Line][Space]
                program.append(INS.COPY)
            elif code == 1: # [Line][Tab]
                program.append(INS.SWAP)
            else:           # [Line][Line]
                program.append(INS.AWAY)

    def parse_math(ins, program):
        code = ins()
        if code == 0:       # [Space]
            code = ins()
            if code == 0:   # [Space][Space]
                program.append(INS.ADD)
            elif code == 1: # [Space][Tab]
                program.append(INS.SUB)
            else:           # [Space][Line]
                program.append(INS.MUL)
        elif code == 1:     # [Tab]
            code = ins()
            if code == 0:   # [Tab][Space]
                program.append(INS.DIV)
            elif code == 1: # [Tab][Tab]
                program.append(INS.MOD)
            else:           # [Tab][Line]
                raise StopIteration()
        else:               # [Line]
            raise StopIteration()

    def parse_heap(ins, program):
        code = ins()
        if code == 0:       # [Space]
            program.append(INS.SET)
        elif code == 1:     # [Tab]
            program.append(INS.GET)
        else:               # [Line]
            raise StopIteration()

    def parse_io(ins, program):
        code = ins()
        if code == 0:       # [Space]
            code = ins()
            if code == 0:   # [Space][Space]
                program.append(INS.OCHR)
            elif code == 1: # [Space][Tab]
                program.append(INS.OINT)
            else:           # [Space][Line]
                raise StopIteration()
        elif code == 1:     # [Tab]
            code = ins()
            if code == 0:   # [Tab][Space]
                program.append(INS.ICHR)
            elif code == 1: # [Tab][Tab]
                program.append(INS.IINT)
            else:           # [Tab][Line]
                raise StopIteration()
        else:               # [Line]
            raise StopIteration()

    def parse_flow(ins, program):
        code = ins()
        if code == 0:       # [Space]
            code = ins()
            label = parse_number(ins)
            if code == 0:   # [Space][Space]
                program.append((INS.PART, label))
            elif code == 1: # [Space][Tab]
                program.append((INS.CALL, label))
            else:           # [Space][Line]
                program.append((INS.GOTO, label))
        elif code == 1:     # [Tab]
            code = ins()
            if code == 0:   # [Tab][Space]
                label = parse_number(ins)
                program.append((INS.ZERO, label))
            elif code == 1: # [Tab][Tab]
                label = parse_number(ins)
                program.append((INS.LESS, label))
            else:           # [Tab][Line]
                program.append(INS.BACK)
        else:               # [Line]
            code = ins()
            if code == 2:   # [Line][Line]
                program.append(INS.EXIT)
            else:           # [Line][Space] or [Line][Tab]
                raise StopIteration()

    ################################################################################

    MNEMONIC = '\
    push copy swap away add sub mul div mod set get part \
    call goto zero less back exit ochr oint ichr iint'.split()

    HAS_ARG = [getattr(INS, name) for name in
               'PUSH COPY AWAY PART CALL GOTO ZERO LESS'.split()]

    HAS_LABEL = [getattr(INS, name) for name in 'PART CALL GOTO ZERO LESS'.split()]

    def disassemble(program, names=False):
        if names:
            names = create_names(program)
        for ins in program:
            if isinstance(ins, tuple):
                ins, arg = ins
                assert ins in HAS_ARG
                has_arg = True
            else:
                assert INS.PUSH <= ins <= INS.IINT
                has_arg = False
            if ins == INS.PART:
                if names:
                    print(MNEMONIC[ins], '"' + names[arg] + '"')
                else:
                    print(MNEMONIC[ins], arg)
            elif has_arg and ins in HAS_ARG:
                if ins in HAS_LABEL and names:
                    assert arg in names
                    print('     ' + MNEMONIC[ins], '"' + names[arg] + '"')
                else:
                    print('     ' + MNEMONIC[ins], arg)
            else:
                print('     ' + MNEMONIC[ins])

    ################################################################################

    def create_names(program):
        names = {}
        number = 1
        for ins in program:
            if isinstance(ins, tuple) and ins[0] == INS.PART:
                label = ins[1]
                assert label not in names
                names[label] = number_to_name(number)
                number += 1
        return names

    def number_to_name(number):
        name = ''
        for offset in reversed(list(partition_number(number, 27))):
            if offset:
                name += chr(ord('A') + offset - 1)
            else:
                name += '_'
        return name

    def partition_number(number, base):
        div, mod = divmod(number, base)
        yield mod
        while div:
            div, mod = divmod(div, base)
            yield mod

    ################################################################################

    CODE = (' \t\n', ' \n ', ' \t \t\n', ' \n\t', ' \n\n', ' \t\n \t\n',
            '\t ', '\t \t', '\t \n', '\t \t ', '\t \t\t', '\t\t ', '\t\t\t',
            '\n \t\n', '\n \t \t\n', '\n \n \t\n', '\n\t \t\n', '\n\t\t \t\n',
            '\n\t\n', '\n\n\n', '\t\n ', '\t\n \t', '\t\n\t ', '\t\n\t\t')

    EXAMPLE = ''.join(CODE)

    ################################################################################

    NOTES = '''\
    STACK
    =====
    push number
    copy
    copy number
    swap
    away
    away number

    MATH
    ====
    add
    sub
    mul
    div
    mod

    HEAP
    ====
    set
    get

    FLOW
    ====
    part label
    call label
    goto label
    zero label
    less label
    back
    exit

    I/O
    ===
    ochr
    oint
    ichr
    iint'''

    ################################################################################
    ################################################################################

    class Stack:

        def __init__(self):
            self.__data = []

        # Stack Operators

        def push(self, number):
            self.__data.append(number)

        def copy(self, number=None):
            if number is None:
                self.__data.append(self.__data[-1])
            else:
                size = len(self.__data)
                index = size - number - 1
                assert 0 <= index < size
                self.__data.append(self.__data[index])

        def swap(self):
            self.__data[-2], self.__data[-1] = self.__data[-1], self.__data[-2]

        def away(self, number=None):
            if number is None:
                self.__data.pop()
            else:
                size = len(self.__data)
                index = size - number - 1
                assert 0 <= index < size
                del self.__data[index:-1]

        # Math Operators

        def add(self):
            suffix = self.__data.pop()
            prefix = self.__data.pop()
            self.__data.append(prefix + suffix)

        def sub(self):
            suffix = self.__data.pop()
            prefix = self.__data.pop()
            self.__data.append(prefix - suffix)

        def mul(self):
            suffix = self.__data.pop()
            prefix = self.__data.pop()
            self.__data.append(prefix * suffix)

        def div(self):
            suffix = self.__data.pop()
            prefix = self.__data.pop()
            self.__data.append(prefix // suffix)

        def mod(self):
            suffix = self.__data.pop()
            prefix = self.__data.pop()
            self.__data.append(prefix % suffix)

        # Program Operator

        def pop(self):
            return self.__data.pop()

    ################################################################################

    class Heap:

        def __init__(self):
            self.__data = {}

        def set_(self, addr, item):
            if item:
                self.__data[addr] = item
            elif addr in self.__data:
                del self.__data[addr]

        def get_(self, addr):
            return self.__data.get(addr, 0)

    ################################################################################

    import os
    import zlib
    import msvcrt
    import pickle
    import string

    class CleanExit(Exception):
        pass

    NOP = lambda arg: None
    DEBUG_WHITESPACE = False

    ################################################################################

    class Program:

        NO_ARGS = INS.COPY, INS.SWAP, INS.AWAY, INS.ADD, \
                  INS.SUB, INS.MUL, INS.DIV, INS.MOD, \
                  INS.SET, INS.GET, INS.BACK, INS.EXIT, \
                  INS.OCHR, INS.OINT, INS.ICHR, INS.IINT

        HAS_ARG = INS.PUSH, INS.COPY, INS.AWAY, INS.PART, \
                  INS.CALL, INS.GOTO, INS.ZERO, INS.LESS

        def __init__(self, code):
            self.__data = code
            self.__validate()
            self.__build_jump()
            self.__check_jump()
            self.__setup_exec()

        def __setup_exec(self):
            self.__iptr = 0
            self.__stck = stack = Stack()
            self.__heap = Heap()
            self.__cast = []
            self.__meth = (stack.push, stack.copy, stack.swap, stack.away,
                           stack.add, stack.sub, stack.mul, stack.div,
                           stack.mod, self.__set, self.__get, NOP,
                           self.__call, self.__goto, self.__zero,
                           self.__less, self.__back, self.__exit,
                           self.__ochr, self.__oint, self.__ichr, self.__iint)

        def step(self):
            ins = self.__data[self.__iptr]
            self.__iptr += 1
            if isinstance(ins, tuple):
                self.__meth[ins[0]](ins[1])
            else:
                self.__meth[ins]()

        def run(self):
            while True:
                ins = self.__data[self.__iptr]
                self.__iptr += 1
                if isinstance(ins, tuple):
                    self.__meth[ins[0]](ins[1])
                else:
                    self.__meth[ins]()

        def __oint(self):
            for digit in str(self.__stck.pop()):
                msvcrt.putwch(digit)

        def __ichr(self):
            addr = self.__stck.pop()
            # Input Routine
            while msvcrt.kbhit():
                msvcrt.getwch()
            while True:
                char = msvcrt.getwch()
                if char in '\x00\xE0':
                    msvcrt.getwch()
                elif char in string.printable:
                    char = char.replace('\r', '\n')
                    msvcrt.putwch(char)
                    break
            item = ord(char)
            # Storing Number
            self.__heap.set_(addr, item)

        def __iint(self):
            addr = self.__stck.pop()
            # Input Routine
            while msvcrt.kbhit():
                msvcrt.getwch()
            buff = ''
            char = msvcrt.getwch()
            while char != '\r' or not buff:
                if char in '\x00\xE0':
                    msvcrt.getwch()
                elif char in '+-' and not buff:
                    msvcrt.putwch(char)
                    buff += char
                elif '0' <= char <= '9':
                    msvcrt.putwch(char)
                    buff += char
                elif char == '\b':
                    if buff:
                        buff = buff[:-1]
                        msvcrt.putwch(char)
                        msvcrt.putwch(' ')
                        msvcrt.putwch(char)
                char = msvcrt.getwch()
            msvcrt.putwch(char)
            msvcrt.putwch('\n')
            item = int(buff)
            # Storing Number
            self.__heap.set_(addr, item)

        def __goto(self, label):
            self.__iptr = self.__jump[label]

        def __zero(self, label):
            if self.__stck.pop() == 0:
                self.__iptr = self.__jump[label]

        def __less(self, label):
            if self.__stck.pop() < 0:
                self.__iptr = self.__jump[label]

        def __exit(self):
            self.__setup_exec()
            raise CleanExit()

        def __set(self):
            item = self.__stck.pop()
            addr = self.__stck.po

    Read the article

  • Exception: Zend Extension ./filename.php does not exist

    - by safarov
    When I try to execute a PHP file through the CLI, this message appears:

        Exception: Zend Extension ./filename.php does not exist

    But run like this, /usr/local/bin/php -q -r "echo 'test';" works as expected. I tried to figure out what is causing this, with no success yet. Here is some information about the environment that may be useful: I have eAccelerator installed (working OK). In php.ini:

        zend_extension="/usr/local/lib/php/extensions/no-debug-non-zts-20100525/eaccelerator.so"
        eaccelerator.shm_size="16"
        eaccelerator.cache_dir="/var/cache/eaccelerator"
        eaccelerator.enable="1"
        eaccelerator.optimizer="1"
        eaccelerator.check_mtime="1"
        eaccelerator.debug="0"
        eaccelerator.filter=""
        eaccelerator.shm_ttl="0"
        eaccelerator.shm_prune_period="0"
        eaccelerator.shm_only="0"

    Apache and all sites are working. Content of filename.php:

        #!/usr/local/bin/php -q
        <?php
        echo 'test';
        ?>

    What is the problem?

    Read the article

  • xf86OpenConsole: Cannot find a free VT: Invalid argument

    - by Oliver Seeliger
    I've set up an Ubuntu 12.04 container from the precreated OpenVZ template. The host system is configured as follows:

        # $ cat /etc/issue
        Debian GNU/Linux 6.0
        # $ uname -a
        Linux openvz-02 2.6.32-16-pve #1 SMP Fri Nov 9 11:42:51 CET 2012 x86_64 GNU/Linux
        # $ apt-cache showpkg proxmox-ve-2.6.32
        Package: proxmox-ve-2.6.32
        # $ tail -n 3 /etc/apt/sources.list
        # PVE packages provided by proxmox.com
        deb http://download.proxmox.com/debian squeeze pve

    For a software project I need a minimal X server and followed the instructions at https://help.ubuntu.com/community/ServerGUI. I simply installed the package xorg (xorg 1:7.6+7ubuntu7.1). Now when I run startx I get an error message:

        Fatal server error:
        xf86OpenConsole: Cannot find a free VT: Invalid argument

    The complete output of startx:

        # startx
        X.Org X Server 1.11.3
        Release Date: 2011-12-16
        X Protocol Version 11, Revision 0
        Build Operating System: Linux 2.6.42-23-generic x86_64 Ubuntu
        Current Operating System: Linux www 2.6.32-16-pve #1 SMP Fri Nov 9 11:42:51 CET 2012 x86_64
        Kernel command line: quiet
        Build Date: 29 August 2012 12:12:33AM
        xorg-server 2:1.11.4-0ubuntu10.8 (For technical support please see http://www.ubuntu.com/support)
        Current version of pixman: 0.24.4
        Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.
        Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
        (==) Log file: "/var/log/Xorg.0.log", Time: Tue Nov 20 08:46:04 2012
        (==) Using system config directory "/usr/share/X11/xorg.conf.d"
        Fatal server error:
        xf86OpenConsole: Cannot find a free VT: Invalid argument

        Please consult the The X.Org Foundation support at http://wiki.x.org for help.
        Please also check the log file at "/var/log/Xorg.0.log" for additional information.

        ddxSigGiveUp: Closing log
        Server terminated with error (1). Closing log file.
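
    A hedged alternative for a container with no virtual terminals (the usual cause of this error under OpenVZ, since /dev/tty* devices are absent): run X against a virtual framebuffer instead of real hardware. The display number and geometry below are examples:

        apt-get install xvfb
        Xvfb :1 -screen 0 1024x768x16 &
        DISPLAY=:1 xterm   # any X client; point your project at DISPLAY=:1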

    Read the article

  • 'Certificate types are not available' When creating computer certificate?

    - by Anicho
    Environment: Windows Server 2008 SP1; Xeon CPU E5430 @ 2.66 GHz; 16.0 GB RAM; 64-bit operating system; 1TB disk space. Server role: SQL Server. Other information: joined to the domain; logged-in user is a domain administrator. Steps that cause the issue: Create a computer certificate using the mmc 'Certificates' snap-in by right-clicking the 'Certificates' folder under the 'Personal' tree of the console root, and clicking All Tasks - Request New Certificate. The Certificate Enrollment window appears; you verify that you are connected to your network and logged onto the domain. Then you click Next, which leads to a window stating the issue: "Certificate types are not available. You cannot request a certificate at this time because no certificate types are available. If you need a certificate, contact your administrator." Wanted solution: Create a certificate on this server to implement SSL connections to the MSSQL servers.
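
    A hedged first check, not from the original report: list the templates the enterprise CA is willing to issue to this machine; an error or an empty list usually means no enterprise CA is reachable in the domain, or the computer account lacks Enroll permission on every template:

        certutil -CATemplates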

    Read the article

  • CertMgr fails trying to import an SPC file

    - by nsr81
    We have an SPC file which came with the Cisco IP Communicator installer. It needs to be imported into the localMachine ROOT store. However, when certmgr.exe is run against this SPC file, it errors out. It doesn't matter whether it's run from within the installer or manually. The command I've tried using is:

        certmgr.exe -add -all CDPcredentials.spc -s -r localMachine root

    The result displayed is:

        Error: Failed to save to the destination store
        CertMgr Failed

    There is no other information, no log file, nothing in the Event Viewer. It's almost as if the ROOT store is in a read-only state. I would also like to point out that I'm able to import single certificates, just not an SPC file, which contains multiple certificates. I have also tried different versions of the CertMgr utility. Running on Windows 7 Enterprise 64-bit. Any assistance would be appreciated.
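
    A hedged alternative, since an SPC file is just a PKCS#7 bundle: certutil can often import multi-certificate files where certmgr fails, into the same localMachine Root store. Run from an elevated prompt:

        certutil -addstore Root CDPcredentials.spc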

    Read the article

  • Environment variable ORACLE_UNQNAME not defined. Please set ORACLE_UNQNAME to database unique name

    - by Tapas Bose
    I have a batch file which starts the Oracle Services net start OracleOraDb11g_home1TNSListener net start OracleServiceORCL call C:\app\Edifixio\product\11.2.0\dbhome_1\BIN\emctl.bat start dbconsole pause But on executing the script I am getting: C:\windows\system32>net start OracleOraDb11g_home1TNSListener The requested service has already been started. More help is available by typing NET HELPMSG 2182. C:\windows\system32>net start OracleServiceORCL The OracleServiceORCL service is starting......... The OracleServiceORCL service was started successfully. C:\windows\system32>call C:\app\Edifixio\product\11.2.0\dbhome_1\BIN\emctl.bat start dbconsole Environment variable ORACLE_UNQNAME not defined. Please set ORACLE_UNQNAME to database unique name. Press any key to continue . . . I am using Windows 7 64 bit with Oracle 11gR2 64 bit. Any information will be very helpful. Thanks and Regards.
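
    A sketch of the usual fix: emctl needs ORACLE_UNQNAME (the database unique name, which for a default single-instance install is the same as the SID) in its environment, so set it in the batch file before the call. ORCL below is an example value:

        set ORACLE_SID=ORCL
        set ORACLE_UNQNAME=ORCL
        call C:\app\Edifixio\product\11.2.0\dbhome_1\BIN\emctl.bat start dbconsole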

    Read the article

  • Hyper-V and Hyper-threading: On or off?

    - by CapBBeard
    Hey all, With the new Xeon CPUs supporting Hyper-threading, what is the current wisdom with regard to using it (or not) on a Hyper-V host machine? I was originally under the impression that turning it on in a virtual host environment could be detrimental as the 'extra' CPUs were not true cores. However I've also read (unconfirmed) comments along the lines of MS doing some hard work to get Hyper-V running well in a Hyper-threading environment. Does anyone have any solid information or experience in this regard? Cheers!

    Read the article

  • "no MPM loaded", but I'm not even using mpm

    - by ezuk
    Running Apache2 on Ubuntu Precise64 in Vagrant. When I try to start it, it says: vagrant@precise64:/etc/apache2$ /etc/init.d/apache2 start * Starting web server apache2 * * The apache2 configtest failed. Output of config test was: AH00534: apache2: Configuration error: No MPM loaded. Action 'configtest' failed. The Apache error log may have more information. But the thing is, my /etc/apache2/apache2.conf file doesn't call for MPM anywhere! I would paste it here but it would make for a huge post... I tried looking up the error log, but I can't find that anywhere, either. Help?
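
    A hedged sketch of the usual cause and fix: this message appears when an Apache 2.4 build (for example from a PPA) reads a 2.2-era config, because 2.4 ships its MPMs as loadable modules and the old config loads none of them. Enabling one and retesting usually resolves it; the error log referenced above defaults to /var/log/apache2/error.log:

        a2enmod mpm_prefork   # or mpm_event / mpm_worker, exactly one
        apachectl configtest
        service apache2 restart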

    Read the article

  • SQL SERVER – 2011 – Wait Type – Day 25 of 28

    - by pinaldave
    Since the beginning of the series, I have been getting the following question again and again: “What are the changes in SQL Server 2011 – Denali with respect to Wait Types?” SQL Server 2011 – Denali is yet to be released, and making statements on the subject will be inappropriate. Denali CTP1 has been released so I suggest that all of you download the same and experiment on it. I quickly compared the wait stats of SQL Server 2008 R2 and Denali (CTP1) and found the following changes: Wait Types Exists in SQL Server 2008 R2 and Not Exists in SQL Server 2011 “Denali” SOS_RESERVEDMEMBLOCKLIST SOS_LOCALALLOCATORLIST QUERY_WAIT_ERRHDL_SERVICE QUERY_ERRHDL_SERVICE_DONE XE_PACKAGE_LOCK_BACKOFF Wait Types Exists in SQL Server 2011 and Not Exists in SQL Server 2008 SLEEP_MASTERMDREADY SOS_MEMORY_TOPLEVELBLOCKALLOCATOR SOS_PHYS_PAGE_CACHE FILESTREAM_WORKITEM_QUEUE FILESTREAM_FILE_OBJECT FILESTREAM_FCB FILESTREAM_CACHE XE_CALLBACK_LIST PWAIT_MD_RELATION_CACHE PWAIT_MD_SERVER_CACHE PWAIT_MD_LOGIN_STATS DISPATCHER_PRIORITY_QUEUE_SEMAPHORE FT_PROPERTYLIST_CACHE SECURITY_KEYRING_RWLOCK BROKER_TRANSMISSION_WORK BROKER_TRANSMISSION_OBJECT BROKER_TRANSMISSION_TABLE BROKER_DISPATCHER BROKER_FORWARDER UCS_MANAGER UCS_TRANSPORT UCS_MEMORY_NOTIFICATION UCS_ENDPOINT_CHANGE UCS_TRANSPORT_STREAM_CHANGE QUERY_TASK_ENQUEUE_MUTEX DBCC_SCALE_OUT_EXPR_CACHE PWAIT_ALL_COMPONENTS_INITIALIZED PREEMPTIVE_SP_SERVER_DIAGNOSTICS SP_SERVER_DIAGNOSTICS_SLEEP SP_SERVER_DIAGNOSTICS_INIT_MUTEX AM_INDBUILD_ALLOCATION QRY_PARALLEL_THREAD_MUTEX FT_MASTER_MERGE_COORDINATOR PWAIT_RESOURCE_SEMAPHORE_FT_PARALLEL_QUERY_SYNC REDO_THREAD_PENDING_WORK REDO_THREAD_SYNC COUNTRECOVERYMGR HADR_DB_COMMAND HADR_TRANSPORT_SESSION HADR_CLUSAPI_CALL PWAIT_HADR_CHANGE_NOTIFIER_TERMINATION_SYNC PWAIT_HADR_ACTION_COMPLETED PWAIT_HADR_OFFLINE_COMPLETED PWAIT_HADR_ONLINE_COMPLETED PWAIT_HADR_FORCEFAILOVER_COMPLETED PWAIT_HADR_WORKITEM_COMPLETED HADR_WORK_POOL HADR_WORK_QUEUE HADR_LOGCAPTURE_SYNC LOGPOOL_CACHESIZE LOGPOOL_FREEPOOLS LOGPOOL_REPLACEMENTSET LOGPOOL_CONSUMERSET LOGPOOL_MGRSET LOGPOOL_CONSUMER LOGPOOLREFCOUNTEDOBJECT_REFDONE HADR_SYNC_COMMIT HADR_AG_MUTEX PWAIT_SECURITY_CACHE_INVALIDATION PWAIT_HADR_SERVER_READY_CONNECTIONS HADR_FILESTREAM_MANAGER HADR_FILESTREAM_BLOCK_FLUSH HADR_FILESTREAM_IOMGR XDES_HISTORY XDES_SNAPSHOT HADR_FILESTREAM_IOMGR_IOCOMPLETION UCS_SESSION_REGISTRATION ENABLE_EMPTY_VERSIONING HADR_DB_OP_START_SYNC HADR_DB_OP_COMPLETION_SYNC HADR_LOGPROGRESS_SYNC HADR_TRANSPORT_DBRLIST HADR_FAILOVER_PARTNER XDESTSVERMGR GHOSTCLEANUPSYNCMGR HADR_AR_UNLOAD_COMPLETED HADR_PARTNER_SYNC HADR_DBSTATECHANGE_SYNC We already know that Wait Types and Wait Stats are going to be the next big thing in the next version of SQL Server. So now I am eagerly waiting to dig deeper in the wait stats. Read all the post in the Wait Types and Queue series. Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it to a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • Severe errors in solr configuration: Error loading class 'solr.TrieDateField'

    - by James Lawruk
    Installed Solr on my Windows XP PC. Tomcat seems to be working fine. Cannot get Solr to work. I noticed the TrieDateField is declared in a file called schema.xml in the SolrHome directory. Any thoughts? The URL http://localhost:8080/solr/ returns:

        HTTP Status 500 - Severe errors in solr configuration.
        Check your log files for more detailed information on what may be wrong.
        If you want solr to continue after configuration errors, change:
        <abortOnConfigurationError>false</abortOnConfigurationError>
        in null
        -------------------------------------------------------------
        org.apache.solr.common.SolrException: Unknown fieldtype 'date' specified on field updated

    Here is an excerpt from the catalina log file:

        SEVERE: org.apache.solr.common.SolrException: Error loading class 'solr.TrieDateField'
        at org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:273)
        Caused by: java.lang.ClassNotFoundException: solr.TrieDateField
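
    A hedged check, not from the original post: solr.TrieDateField first shipped with Solr 1.4, so a schema.xml taken from a newer example paired with an older solr.war deployed on Tomcat fails with exactly this ClassNotFoundException. The paths below are examples; point them at your SolrHome and deployed webapp:

        rem confirm the schema really references the class
        findstr /n "TrieDateField" C:\solr\conf\schema.xml
        rem check which Solr version the deployed webapp actually is
        findstr /i "version" C:\Tomcat\webapps\solr\META-INF\MANIFEST.MF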

    Read the article

  • SQL SERVER – Generate Report for Index Physical Statistics – SSMS

    - by pinaldave
    A few days ago, I wrote about SQL SERVER – Out of the Box – Activity and Performance Reports from SSMS (Link). A user asked me a question regarding whether we can use similar reports to get details about indexes. Yes, it is possible to do the same. Similar types of reports are available at the database level, just like those available at the server instance level. You can right-click on the database name and click Reports. Under Standard Reports, you will find the following reports: Disk Usage; Disk Usage by Top Tables; Disk Usage by Table; Disk Usage by Partition; Backup and Restore Events; All Transactions; All Blocking Transactions; Top Transactions by Age; Top Transactions by Blocked Transactions Count; Top Transactions by Locks Count; Resource Locking Statistics by Objects; Object Execute Statistics; Database Consistency History; Index Usage Statistics; Index Physical Statistics; Schema Change History; User Statistics. Select the report named Index Physical Statistics. Once clicked, a report containing all the index names, along with other information related to the indexes, will be visible, e.g. index type and number of partitions. One column that caught my interest was Operation Recommended; in some places, it suggested that an index needed to be rebuilt. It is also possible to click and expand the partitions column and see additional details about an index as well. DBAs and developers who just want to get an idea of how their indexes are doing, along with their physical statistics, can use this tool. Note: Please note that I will not rebuild my indexes just because this report recommends it; there are many other parameters you need to consider before rebuilding indexes. However, this tool gives you accurate stats for your indexes, and the report can be exported right away to Excel or PDF by right-clicking on it. Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, SQL Utility, T SQL, Technology
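
    For anyone who prefers a query to the GUI report, a hedged sketch of the DMV the report draws from, sys.dm_db_index_physical_stats (the database name is an example):

        sqlcmd -E -d MyDatabase -Q "SELECT OBJECT_NAME(ips.object_id) AS table_name, ips.index_id, ips.index_type_desc, ips.avg_fragmentation_in_percent FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips"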

    Read the article

  • SQL SERVER – Four Posts on Removing the Bookmark Lookup – Key Lookup

    - by pinaldave
    In recent times I have observed that not many people have a proper understanding of what a bookmark lookup or key lookup is. An increasing number of questions tells me that this is something developers encounter every single day but have no idea how to deal with. I have previously written three articles on this subject, and I want to point all of you looking for further information to the same posts. SQL SERVER – Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup SQL SERVER – Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup – Part 2 SQL SERVER – Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup – Part 3 SQL SERVER – Interesting Observation – Execution Plan and Results of Aggregate Concatenation Queries In one of my recent classes we had an in-depth conversation about the alternatives to creating covering indexes to remove the bookmark lookup. I really want to open this question to all of you and see what the community thinks about the same. Is there any other way, besides creating a covering index or included index, to remove this expensive key lookup? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Backup and Restore, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology
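
    As a worked illustration of the standard fix under discussion, a hedged sketch of an included (covering) index; the table and column names are invented. Once the index covers every column the query touches, the key lookup disappears from the plan:

        sqlcmd -E -Q "CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID) INCLUDE (OrderDate, TotalDue)"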

    Read the article

  • How to suppress an unwanted external Autodiscover lookup?

    - by chris
    In a small network with Exchange 2007, when starting Outlook 2010 (and once in a while afterwards), users get a prompt to confirm that it's safe to get account configuration information from cpanelemaildiscovery.cpanel.net/autodiscover/autodiscover.xml (I have read in a couple of forums that there is a bug in cPanel, but that's beside the point.) I'm puzzled because I can't find any autodiscover DNS entries anywhere, neither internally nor externally. The only hint is that we use an external hosting company for our website and for one single email address, which runs on cPanel. So I guess that Outlook makes an external DNS query and tests all entries? It creates a lot of confusion for the users, and frankly I'm not too happy that the external hosting company gets contacted by all our users. How can I suppress this behavior? Thanks
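
    A hedged sketch of the usual suppression, not from the original thread: Outlook 2010 honors registry switches that skip individual autodiscover steps, and ExcludeHttpsRootDomain stops the probe of the bare web domain that is hitting the cPanel host. Test on one machine before deploying broadly:

        reg add HKCU\Software\Microsoft\Office\14.0\Outlook\AutoDiscover /v ExcludeHttpsRootDomain /t REG_DWORD /d 1 /f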

    Read the article

  • Outlook Web Access: "Outlook Web Access has encountered a Web browsing error"

    - by Calum
    When one of my colleagues is accessing Outlook Web Access from IE, he frequently gets an error reported: "Outlook Web Access has encountered a Web browsing error". The error report includes the following: Client Information User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; GTB5; SLCC1; .NET CLR 2.0.50727; .NET CLR 3.0.04506) CPU Class: x86 Platform: Win32 System Language: en-gb User Language: en-gb CookieEnabled: true Mime Types: Exception Details Date: Tue Apr 6 16:46:54 UTC+0100 2010 Message: Automation server can't create object Url: https://example.com/owa/x.y.z.a/scripts/premium/uglobal.js Line: 85 Any idea as to what might be causing such a problem? The only solution suggested so far is "Reinstall Windows", which he'd rather avoid.

    Read the article
