Search Results

Search found 5581 results on 224 pages for 'alignment character'.

Page 182/224 | < Previous Page | 178 179 180 181 182 183 184 185 186 187 188 189  | Next Page >

  • StringBuffer behavior in LWJGL

    - by Michael Oberlin
    Okay, I've been programming in Java for about ten years, but am entirely new to LWJGL. I have a specific problem whilst attempting to create a text console. I have built a class meant to abstract input polling to it, which (in theory) captures key presses from the Keyboard object and appends them to a StringBuilder/StringBuffer, then retrieves the completed string after receiving the ENTER key. The problem is, after I trigger the String return (currently with ESCAPE), and attempt to print it to System.out, I consistently get a blank line. I can get an appropriate string length, and I can even sample a single character out of it and get complete accuracy, but it never prints the actual string. I could swear that LWJGL slipped some kind of thread-safety trick in while I wasn't looking. Here's my code:

        static volatile StringBuffer command = new StringBuffer();

        @Override
        public void chain(InputPoller poller) {
            this.chain = poller;
        }

        @Override
        public synchronized void poll() {
            // basic testing for modifier keys, to be used later on
            boolean shift = false, alt = false, control = false, superkey = false;
            if(Keyboard.isKeyDown(Keyboard.KEY_LSHIFT) || Keyboard.isKeyDown(Keyboard.KEY_RSHIFT)) shift = true;
            if(Keyboard.isKeyDown(Keyboard.KEY_LMENU) || Keyboard.isKeyDown(Keyboard.KEY_RMENU)) alt = true;
            if(Keyboard.isKeyDown(Keyboard.KEY_LCONTROL) || Keyboard.isKeyDown(Keyboard.KEY_RCONTROL)) control = true;
            if(Keyboard.isKeyDown(Keyboard.KEY_LMETA) || Keyboard.isKeyDown(Keyboard.KEY_RMETA)) superkey = true;

            while(Keyboard.next())
                if(Keyboard.getEventKeyState()) {
                    command.append(Keyboard.getEventCharacter());
                }

            if (Framework.isConsoleEnabled() && Keyboard.isKeyDown(Keyboard.KEY_ESCAPE)) {
                System.out.println("Escape down");
                System.out.println(command.length() + " characters polled"); // works
                System.out.println(command.toString().length());            // works
                System.out.println(command.toString().charAt(4));           // works
                System.out.println(command.toString().toCharArray());       // blank line!
                System.out.println(command.toString());                     // blank line!
                Framework.disableConsole();
            }

            // TODO: Add command construction and console management after that
        }
    }

    Maybe the answer's obvious and I'm just feeling tired, but I need to walk away from this for a while. If anyone sees the issue, please let me know. This machine is running the latest release of Java 7 on Ubuntu 12.04, Mate desktop environment. Many thanks.

    Read the article

  • How can I tweak this A* search pathfinding algorithm to handle different terrain movement values?

    - by user422318
    I'm creating a 2D map-based action game with similar interaction design to Diablo II. In other words, the player clicks around a map to move their character. I just finished player movement and am moving on to pathfinding. In the game, enemies should charge the player's character. There are also five different terrain types that give different movement bonuses, and I want the AI to take advantage of these terrain bonuses as it tries to reach the player. I was told to check out the A* search algorithm (http://en.wikipedia.org/wiki/A*_search_algorithm). I'm doing this game in HTML5 and JavaScript, and found a version in JavaScript: http://www.briangrinstead.com/blog/astar-search-algorithm-in-javascript

    I'm trying to figure out how to tweak it, though. Below are my ideas about what I need to change. What else do I need to worry about?

    - When I create a graph, I will need to initialize the 2D array I pass in based on a traversal of the map that corresponds to the different terrain types.
    - in graph.js: the "GraphNodeType" definition needs to be modified to handle the 5 terrain types. There will be no walls.
    - in astar.js: the g and h scoring will need to be modified. How should I do this?
    - in astar.js: isWall() should probably be removed. My game doesn't have walls.
    - in astar.js: I'm not sure what this is. I think it indicates a node that isn't valid to be processed. When would that happen, though?

    At a high level, how do I change this algorithm from "oh, is there a wall there?" to "will this terrain get me to the player faster than the terrain around me?" Because of time, I'm also debating reusing my Bresenham algorithm for the enemies. Unfortunately, the different terrain movement bonuses won't be used by the AI, which will make the game suck. :/ I'd really like to have this in for the prototype, but I'm not a developer by trade nor am I a computer scientist. :D If you know of any code that does what I'm looking for, please share! Sanity-check tips for this are also appreciated.
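
    Not the linked library's API (the names, and the five terrain costs below, are made up for illustration), but a minimal sketch of the usual tweak: replace the boolean wall test with a per-tile movement cost that is added to g, and scale the heuristic by the cheapest possible cost so it stays admissible.

        import heapq, itertools

        # Hypothetical terrain costs: lower = faster to cross (assumed values).
        TERRAIN_COST = {"road": 1.0, "grass": 2.0, "sand": 3.0, "swamp": 5.0, "water": 8.0}
        CHEAPEST = min(TERRAIN_COST.values())

        def heuristic(a, b):
            # Manhattan distance times the best-case terrain cost keeps h admissible.
            return (abs(a[0] - b[0]) + abs(a[1] - b[1])) * CHEAPEST

        def a_star(grid, start, goal):
            """grid[y][x] is a terrain name; start/goal are (x, y); returns a path list or None."""
            tie = itertools.count()          # tie-breaker so heap tuples never compare nodes' parents
            open_heap = [(heuristic(start, goal), next(tie), start, None)]
            came_from = {}
            g_score = {start: 0.0}
            while open_heap:
                _, _, node, parent = heapq.heappop(open_heap)
                if node in came_from:        # already expanded via a cheaper route
                    continue
                came_from[node] = parent
                if node == goal:
                    path = []
                    while node is not None:
                        path.append(node)
                        node = came_from[node]
                    return path[::-1]
                x, y = node
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]):
                        ng = g_score[node] + TERRAIN_COST[grid[ny][nx]]   # cost replaces the isWall() test
                        if ng < g_score.get((nx, ny), float("inf")):
                            g_score[(nx, ny)] = ng
                            heapq.heappush(open_heap, (ng + heuristic((nx, ny), goal), next(tie), (nx, ny), node))
            return None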

    Read the article

  • Configuring trace file size and number in WebCenter Content 11g

    - by Kyle Hatlestad
    Lately I've been doing a lot of debugging using the System Output tracing in WebCenter Content 11g. This is built-in tracing in the content server which provides a great level of detail on what's happening under the hood. You can access the settings as well as a view of the tracing by going to Administration -> System Audit Information. From here, you can select the tracing sections to include. Some of my personal favorites are searchquery, systemdatabase, userstorage, and indexer. Usually I'm trying to find out some information regarding a search, database query, or user information. Besides debugging, it's also very helpful for performance tuning.

    One of the nice tricks with the tracing is it honors the wildcard (*) character. So you can put in 'schema*' and gather all of the schema-related tracing. And you can notice if you select 'all' and update, it changes to just a *. To view the tracing in real time, you simply go to the 'View Server Output' page and the latest tracing information will be at the bottom. This works well if you're looking at something pretty discrete and the system isn't getting much activity. But if you've got a lot of tracing going on, it would be better to go after the trace log file itself. By default, the log files can be found in the <content server instance directory>/data/trace directory. You'll see it named 'idccs_<managed server name>_current.log'. You may also find previous trace logs that have rolled over; in this case they will be identified by a date/time stamp in the name. By default, the server will rotate the logs after they reach 1MB in size, and it will keep the most recent 10 logs before they roll off and get deleted. If your server is in a cluster, then the trace file should be configured to be local to the node per the recommended configuration settings.

    If you're doing some extensive tracing and need to capture all of the information, there are a couple of configuration flags you can set to control the logs:

        # Change log size to 10MB and number of logs to 20
        FileSizeLimit=10485760
        FileCountLimit=20

    This is set by going to Admin Server -> General Configuration and entering them in the Additional Configuration Variables: section. Restart the server and it should take on the new logging settings.

    Read the article

  • How to implement a component based system for items in a web game.

    - by Landstander
    Reading several other questions and answers on using a component-based system to define items, I want to use one for the items and spells in a web game written in PHP. I'm just stuck on the implementation. I'm going to use a DB schema suggested in this series (part 5 describes the schema): http://t-machine.org/index.php/2007/09/03/entity-systems-are-the-future-of-mmog-development-part-1/ This means I'll have an items table with generic item properties, a table listing all of the components for an item, and finally records in each component table used to make up the item. Assuming I can select the first two together in a single query, I'm still going to do N queries for each component type. I'm kind of fine with this because I can cache the data into memcache and check there first before doing any queries. I'll need to build up the items on every request they are used in, so the implementation needs to be on the lean side even if they're pulled from memcache. But right there is where my confidence in implementing a component system for my items ends. I figure I'd need to bring attributes and behaviors into the container from each component it uses. I'm just not sure how to do that effectively and not end up writing a lot of specialized code to deal with each component. For example, an AttackComponent might need to know how to filter targets inside of a battle context and also maybe provide an attack behavior. That same item might also have a UsableComponent which allows the item to be used and apply some effect onto a different set of targets, filtered differently, from the same battle context. Then not every part of an item is an active part; an AttributeBonusComponent might need to only kick in when the item is in an equipped state or when displaying the item details page. Ultimately, how should I bring all of the components together into the container so that when I use an item as a weapon I get the correct list of targets? Know when a weapon can also be used as an item? Or apply the bonuses the item provides to a character object? I feel like I've gone too far down the rabbit hole and I can't grasp the simple solution in front of me. (If that makes any sense at all.) Likewise, if I were to implement the best answer from "How to model multiple 'uses' (e.g. weapon) for usable-inventory/object/items (e.g. katana) within a relational database", I feel like I'd have a lot of the same questions.
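
    As a rough, language-agnostic sketch only (Python rather than PHP, and every class, method, and field name here is invented rather than taken from the linked schema): one way to avoid per-component special cases is to give components small optional hooks and let the item container simply aggregate whatever its components return.

        class Component:
            """Base class; components opt in to capabilities by overriding hooks."""
            def filter_targets(self, context):
                return None              # None = this component doesn't pick targets
            def bonuses(self, state):
                return {}                # e.g. {"strength": 2} when equipped

        class AttackComponent(Component):
            def __init__(self, damage):
                self.damage = damage
            def filter_targets(self, context):
                # assumes the battle context exposes .targets with an .is_enemy flag
                return [t for t in context.targets if t.is_enemy]

        class AttributeBonusComponent(Component):
            def __init__(self, bonuses):
                self._bonuses = bonuses
            def bonuses(self, state):
                return self._bonuses if state == "equipped" else {}

        class Item:
            """Container: the generic item row plus whatever components the DB says it has."""
            def __init__(self, name, components):
                self.name, self.components = name, components
            def targets_for(self, context):
                for c in self.components:
                    found = c.filter_targets(context)
                    if found is not None:
                        return found
                return []
            def stat_bonuses(self, state):
                total = {}
                for c in self.components:
                    for key, value in c.bonuses(state).items():
                        total[key] = total.get(key, 0) + value
                return total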

    Read the article

  • Working with Windows and Unix

    - by user554629
    Beware of new line characters. One of the most frequent issues we encounter in Tech Support is the corruption of files that are transferred between Windows and Unix. The transfer can occur at any stage, but ultimately involves a transfer of a file using an ftp client that is running on Windows; it could be ftp or FileZilla.

    Windows uses two characters to mark the end of a line in a text file (CR/LF), carriage return plus linefeed. Unix uses a single character (LF). In all situations, it is best to use binary mode transfer for all files, including ascii text files.

    Common problems:

    - Upload a core file from unix to windows using ftp in ascii mode. The file is going to be larger on Windows than Unix. ftp doesn't know if this is a text file with real line-ends; it takes every ascii LF and transmits two ascii characters, CR/LF. The core file, tar file, library ... will be corrupted when transferred to Oracle.
    - Download a shell script to Windows, and transfer it to Unix using ftp. If the file is edited on Windows, the unix script line-end chars will be doubled. Unix doesn't know how to handle that, and will likely tell you the script is not executable. Why? The first line of a shell script (called "sh-bang") identifies the command interpreter the unix shell should use for this script. Common examples:

        #!/bin/sh
        #!/bin/ksh
        #!/bin/bash
        #!/bin/perl
        #!/bin/sh^M       # will not be understood
        #!/bin/env ksh    # special syntax: find ksh and run it

    dos2unix is a common utility found on most unix platforms that repairs the issue of Windows line-end characters in unix script files. I've written my own flavor of this utility for use in Tech Support and build environments that is a bit easier to use and has some nice side-effects:

    - accepts a list of files: dos2unix *.sh
    - repairs the file in place; doesn't generate a new file you have to name
    - retains the same timestamp; it is the encoding that changed, not the file content

    Here are the versions of dos2unix for each of the environments we work in. They are compressed with gzip, to avoid the ftp ascii transfer trap, and because I am quite limited in the number of files I can upload to this blog.

    - AIX
    - Linux
    - Solaris sparc
    - Windows
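
    The downloads above aren't reproduced here, but as a minimal sketch of what such a repair utility does (an illustration, not the author's tool or the stock dos2unix): strip the CR from each CR/LF pair in place and put the original timestamp back.

        #!/usr/bin/env python3
        """Tiny dos2unix-style fixer: `fixeol.py file1.sh file2.sh ...` (illustrative only)."""
        import os, sys

        def to_unix(path):
            stat = os.stat(path)                        # remember the original timestamps
            with open(path, "rb") as f:
                data = f.read()
            fixed = data.replace(b"\r\n", b"\n")        # CR/LF -> LF
            if fixed != data:
                with open(path, "wb") as f:             # repair in place, no new file to name
                    f.write(fixed)
                os.utime(path, (stat.st_atime, stat.st_mtime))  # keep the same timestamp

        if __name__ == "__main__":
            for name in sys.argv[1:]:                   # accepts a list of files
                to_unix(name)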

    Read the article

  • Sql Table Refactoring Challenge

    I've been working a bit on cleaning up a large table to make it more efficient. I pretty much know what I need to do at this point, but I figured I'd offer up a challenge for my readers, to see if they can catch everything I have as well as to see if I've missed anything. So to that end, I give you my table:

        CREATE TABLE [dbo].[lq_ActivityLog](
            [ID] [bigint] IDENTITY(1,1) NOT NULL,
            [PlacementID] [int] NOT NULL,
            [CreativeID] [int] NOT NULL,
            [PublisherID] [int] NOT NULL,
            [CountryCode] [nvarchar](10) NOT NULL,
            [RequestedZoneID] [int] NOT NULL,
            [AboveFold] [int] NOT NULL,
            [Period] [datetime] NOT NULL,
            [Clicks] [int] NOT NULL,
            [Impressions] [int] NOT NULL,
            CONSTRAINT [PK_lq_ActivityLog2] PRIMARY KEY CLUSTERED (
                [Period] ASC,
                [PlacementID] ASC,
                [CreativeID] ASC,
                [PublisherID] ASC,
                [RequestedZoneID] ASC,
                [AboveFold] ASC,
                [CountryCode] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    And now some assumptions and additional information:

    - The table currently has 200,000,000 rows.
    - PlacementID ranges from 1 to 5000 and should support at least 50,000.
    - CreativeID ranges from 1 to 5000 and should support at least 50,000.
    - PublisherID ranges from 1 to 500 and should support at least 50,000.
    - CountryCode is a 2-character ISO standard (e.g. US) and there is a country table with an integer ID already. There are < 300 rows.
    - RequestedZoneID ranges from 1 to 100 and should support at least 50,000.
    - AboveFold has values of -1, 0, or 1 only.
    - Period is a date (no time).
    - Clicks range from 0 to 5000.
    - Impressions range from 0 to 5,000,000.

    The table is currently write-mostly. Its primary purpose is to log advertising activity as quickly as possible. Nothing in the rest of the system reads from it except for batch jobs that pull the data into summary tables. Here's the current information on the database table's size:

    Design goals: This table has been in use for about 5 years and has performed very well during that time. The only complaints we have are that it is quite large and also there are occasionally timeouts for queries that reference it, particularly when batch jobs are pulling data from it. Any changes should be made with an eye toward keeping write performance optimal while trying to reduce space and improve read performance / eliminate timeouts during read operations.

    Refactor: There are, I suggest to you, some glaringly obvious optimizations that can be made to this table. And I'm sure there are some ninja tweaks known to SQL gurus that would be a big help as well. I'll post my own suggested changes in a follow-up post; for now, feel free to comment with your suggestions.

    Read the article

  • Getting current time in milliseconds

    - by user90293423
    How do I get the current time in milliseconds? I'm working on a hacking simulation game, and whenever someone connects to another computer/NPC, a login screen pops up with a button on the side called BruteForce. When BruteForce is clicked, what I want the program to do is calculate how many seconds cracking the password is going to take based on the player's CPU speed, but that's the easy part. The hard part is I want to enter a character in the password box every X milliseconds, based on a TimeToCrack divided by PasswordLength formula. But since I don't know how to find how many milliseconds have elapsed since the second has passed, the program waits until the CurrentTime is higher than TimeBeforeTheLoopStarted + HowLongItTakesToTypeACharacter, which is always going to be a second. How would you handle my problems? I've commented the game-breaking part.

        std::vector<QString> hardware = user.getHardware();
        QString CPU = hardware[0];
        unsigned short Speed = 0;
        if(CPU == "OMG SingleCore 1.8GHZ"){
            Speed = 2;
        }
        const short passwordLength = password.length(); /* It's equal to 16 */
        int Time = passwordLength / Speed;
        double TypeSpeed = Time / passwordLength;
        time_t t = time(0);
        struct tm * now = localtime(&t);
        unsigned short EndTime = (now->tm_sec + Time) % 60;
        unsigned short CurrentTime = 0;
        short i = passwordLength - 1;
        do{
            t = time(0);
            now = localtime(&t);
            CurrentTime = now->tm_sec;
            do{
                t = time(0);
                now = localtime(&t);
            }while(now->tm_sec < CurrentTime + TypeSpeed); /* Highly flawed */
            /* Do this while your integer value is under this double value */
            QString tempPass = password;
            tempPass.chop(i);
            ui->lineEdit_2->setText(tempPass);
            i--;
        }while(CurrentTime != EndTime);
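
    Not the asker's C++/Qt code, just a rough sketch (in Python, with made-up numbers) of the usual approach: take timestamps from a monotonic clock with sub-second resolution instead of comparing whole seconds from localtime, and reveal the next character once its scheduled offset has elapsed.

        import time

        password = "hunter2hunter2xx"            # stand-in 16-character value
        cpu_speed = 2                            # characters cracked per second (assumed)
        total_time = len(password) / cpu_speed   # seconds to crack the whole password
        per_char = total_time / len(password)    # delay between typed characters

        start = time.monotonic()                 # monotonic float seconds; milliseconds are fractions
        for i in range(1, len(password) + 1):
            # wait until it is time to reveal the next character
            while time.monotonic() - start < i * per_char:
                time.sleep(0.001)
            print(password[:i])                  # in the game this would update the line edit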

    Read the article

  • Invalid format for New Relic licence when installing on Elastic Beanstalk

    - by BenFreke
    We've created an app that is running on an Elastic Beanstalk instance, 64-bit PHP version 5.4 (so not legacy). I've used the New Relic installation instructions to install New Relic, and viewing phpinfo shows that New Relic is installed. However, I'm not getting any data in New Relic, and that is because it is saying that the licence is ***invalid format*** under newrelic.licence. I'm getting the licence from my New Relic account, and it is a 40-character hexadecimal string. Here is the current newrelic.config file in the .ebextensions folder I'm using, with most of the licence key commented out.

        packages:
          yum:
            newrelic-php5: []
          rpm:
            newrelic: http://yum.newrelic.com/pub/newrelic/el5/x86_64/newrelic-repo-5-3.noarch.rpm
        commands:
          configure_new_relic:
            command: newrelic-install install
            env:
              NR_INSTALL_SILENT: true
              NR_INSTALL_KEY: ec9a4...

    Skitch of relevant phpinfo

    Can anyone shed some light on what's going on here? I've tried two different New Relic licence keys with the same error; I've also surrounded it with single quote marks and tried uppercase only. And at this point I'm out of ideas on what to try. We're not AWS gurus, so it could very easily be something simple like not opening a port to allow the licence to be validated?

    Read the article

  • Windows 2003 print services for unix causing CUPS "lpd_command returning 1"

    - by Stephen P. Schaefer
    We have several Windows 2003 servers with Print Services for Unix on them, which allow Linux machines running CUPS to use printers defined to CUPS with the URI lpd://printer_server/printer_queue_name - they work. An attempt to provide different printers on a different Windows 2003 server with Print Services for Unix newly enabled causes CUPS to behave like this: a newly defined printer will be in state "Idle", and an attempt to print causes CUPS to change the printer state to "Disabled". In /var/log/cups/error_log, the relevant messages appear to be:

        D [01/Dec/2012:06:14:18 -0800] [Job 16] lpd_command 02 hp775cm_ps
        D [01/Dec/2012:06:14:18 -0800] [Job 16] Sending command string (16 bytes)...
        D [01/Dec/2012:06:14:18 -0800] [Job 16] Reading command status...
        D [01/Dec/2012:06:14:18 -0800] [Job 16] lpd_command returning 1
        E [01/Dec/2012:06:14:18 -0800] PID 18786 stopped with status 1!

    Since my Linux boxes can print to other printers via other Windows 2003 print spoolers, I'm wondering what obscure Windows component could be causing this. I don't think it is Windows Firewall, since nmap sees the lpd port (515) open on the server. telnet to the server at port 515 declares:

        Connected to server.internal.example.com (10.22.33.44).
        Escape character is '^]'.
        Connection closed by foreign host.

    Windows clients successfully print to the CIFS/SMB share of the hp755cm_ps printer. What other reasons are there for Windows to refuse an lpd request?

    Read the article

  • OSX telnet not working from localhost

    - by Stewie
    Telnet from my Mac OS X machine doesn't work, while it works from the AWS instances and other networks. I have no local firewalls. Nothing related in /var/log/system. In the transcript below, telnet hangs at "Trying 74.125.136.26...". I have tried many different addresses across multiple domains.. same result!

        Rocky-Balboas-MacBook-Pro:~ rocky$ dig gmail.com MX +short
        30 alt3.gmail-smtp-in.l.google.com.
        40 alt4.gmail-smtp-in.l.google.com.
        10 alt1.gmail-smtp-in.l.google.com.
        20 alt2.gmail-smtp-in.l.google.com.
        5 gmail-smtp-in.l.google.com.
        Rocky-Balboas-MacBook-Pro:~ rocky$ telnet alt2.gmail-smtp-in.l.google.com 25
        Trying 74.125.136.26...
        ^C
        Rocky-Balboas-MacBook-Pro:~ rocky$ telnet localhost 80
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        ^C
        Connection closed by foreign host.

    Edit: I let the telnet command run, and here are the results:

        telnet: connect to address 74.125.136.27: Operation timed out
        Trying 2a00:1450:4013:c01::1a...
        telnet: connect to address 2a00:1450:4013:c01::1a: No route to host
        telnet: Unable to connect to remote host

    Read the article

  • can't get php mail() working on Ubuntu desktop version with sendmail and postfix

    - by EricP
    I'm running Ubuntu 9.10 LAMP and trying to do a simple email test with PHP, and I'm not getting any emails sent.

        mail("[email protected]", "eric-linux test", "test") or die("can't send mail");

    I get no errors from PHP when running that script. In my php.ini file is:

        sendmail_path = /usr/lib/sendmail -t -i

        $ sudo ps aux | grep sendmail
        eric  2486 0.0 0.4 8368 2344 pts/0 T  14:52 0:00 sendmail -s “Hello world” [email protected]
        eric  8747 0.0 0.3 5692 1616 pts/2 T  16:18 0:00 sendmail
        eric  8749 0.0 0.3 5692 1636 pts/2 T  16:18 0:00 sendmail start
        eric  9190 0.0 0.3 5692 1636 pts/2 T  19:12 0:00 sendmail start
        eric  9192 0.0 0.3 5692 1616 pts/2 T  19:12 0:00 sendmail
        eric  9425 0.0 0.3 5692 1620 pts/1 T  19:37 0:00 sendmail
        eric  9427 0.0 0.3 6584 1636 pts/1 T  19:37 0:00 sendmail restart
        eric  9429 0.0 0.3 5692 1636 pts/1 T  19:38 0:00 /usr/lib/sendmail restart
        eric  9432 0.0 0.1 3040  804 pts/1 R+ 19:38 0:00 grep --color=auto sendmail

    When I run $ sendmail start it just hangs there doing nothing. I installed postfix also to see if it would help, but it didn't. I tried to see port 25:

        eric@eric-linux:~$ telnet localhost 25
        Trying ::1...
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        220 eric-linux ESMTP Postfix (Ubuntu)

    thanks

    Read the article

  • Remote Desktop Mobile mangles barcodes coming from scanner

    - by sfonck
    We have an application here using handhelds to scan barcodes. These handhelds actually make a remote desktop session to a server where the application runs. Works fine. Now we have bought some new Motorola MC55's running 'Windows Mobile 6.1 Classic', and when using the application over remote desktop, it mangles the characters of the barcodes.... I have already tried the following things:

    - When scanning a barcode on the MC55 itself, it is displayed correctly.
    - When scanning a barcode via the remote desktop into a notepad session, it is incorrect.
    - Played with all options of the 'Remote Desktop Mobile' - no result.
    - Disabled 'autocorrect' and 'suggest words when entering text' in the input settings - no result.

    The strange thing is:

    - a barcode which consists of only numbers gets scanned correctly
    - the mangled characters come through in lower case
    - for some codes \t is mangled in between (it should normally be entered after the barcode)

    e.g.:

    - 'PERIN4' becomes 'ERINp4'
    - 'MGZB' becomes 'GZB m'
    - 'BAK664' becomes 'AK664 b'
    - 'MAGBFA01' becomes 'AGBFmA01'
    - '5021879949500' gets scanned correctly

    Final solution: The supplier of the handhelds said the handheld was sending the characters too fast over the remote desktop connection. They changed the handheld to wait 50ms between sending each character, which produces correct results now. Scanning a barcode became somewhat slower, but it is hardly noticeable to end users.

    Read the article

  • How can I recover data from a dmg that cannot mount?

    - by Benjamin Lee
    I backed up a hard drive in a dmg, then reformatted the hard drive. (I had deleted the EFI partition before, which was preventing me from reinstalling the operating system.) When I tried to use the restore function in Disk Utility, it gave an input/output error. I get this error with anything I do to the image, including mounting, converting, attaching, verifying, scanning, and getting info through hdiutil imageinfo. I have run all of these with hdiutil and the -noverify, -nomount, and -ignorebadchecksums flags. When I copy the image onto another disk/partition I get a different error: something like "No filesystem". I cannot repair the image with Disk Utility or asr, which both throw the I/O error. When I put the -verbose flag on the command I actually get a different error: "hdiutil: attach failed - No child processes". I have output from both the -verbose and -debug flags, but it is fairly long, so I had to attach a link to avoid the 3000-character limit. No recovery system can get the data because it is both compressed and unmountable. How can I get the data back, and what has gone wrong? -debug -verbose

    Read the article

  • Problems configuring an SSH tunnel to a Nexentastor appliance for use with headless Crashplan

    - by Rob Smallshire
    Problem: I am attempting to configure an SSH tunnel to a NexentaStor appliance from either a Windows or Linux computer so that I can connect a CrashPlan Desktop GUI to a headless CrashPlan server running on the Nexenta box, according to these instructions on the CrashPlan support site: Connect to a Headless CrashPlan Desktop. So far, I've failed to get a working SSH tunnel from either a Windows client (using PuTTY) or a Linux client (using command line SSH). I'm fairly sure the problem is at the receiving end with NexentaStor. A blog article - CrashPlan for Backup on Nexenta - indicates that it could be made to work only "after enabling TCP forwarding in Nexenta in /etc/ssh/sshd_config" - although I'm not sure how to go about that or specifically what I need to do.

    Things I have tried:

    Ensuring the CrashPlan server on the Nexenta box is listening on port 4243:

        $ netstat -na | grep LISTEN | grep 42
        127.0.0.1.4243    *.*    0 0 131072 0 LISTEN
        *.4242            *.*    0 0  65928 0 LISTEN

    Establishing a tunnel from a Linux host:

        $ ssh -L 4200:localhost:4243 admin@10.0.0.56

    and then, from another terminal on the Linux host, using telnet to verify the tunnel:

        $ telnet localhost 4200
        Trying ::1...
        Connected to localhost.
        Escape character is '^]'.

    with nothing more, although the CrashPlan server should respond with something. From Windows, using PuTTY, I have followed the instructions on the CrashPlan support site to establish an equivalent tunnel, but then telnet on Windows gives me no response at all and the CrashPlan GUI can't connect either. The PuTTY log for the tunnelled connection shows reasonable output:

        ...
        2011-11-18 21:09:57 Opened channel for session
        2011-11-18 21:09:57 Local port 4200 forwarding to localhost:4243
        2011-11-18 21:09:57 Allocated pty (ospeed 38400bps, ispeed 38400bps)
        2011-11-18 21:09:57 Started a shell/command
        2011-11-18 21:10:09 Opening forwarded connection to localhost:4243

    but the telnet localhost 4200 command from Windows does nothing at all - it just waits with a blank terminal. On the NexentaStor server I've examined the /etc/ssh/sshd_config file and everything seems 'normal', and I've commented out the ListenAddress entries to ensure that I'm listening on all interfaces. How can I establish a tunnel, and how can I verify that it is working?

    Read the article

  • How to easily input and display isolated japanese radicals on MacOS X?

    - by ogerard
    (This question was considered off-topic on Japanese.SE and more suitable for Super User.)

    I like to write computer notes about what I learn in Japanese. From time to time, I would like to be able to include in my text a given radical, say kokoro 心, which takes several graphic forms when used as an element in a more complex kanji. I did not succeed on my system (Mac OS X 10.7) in finding the glyphs for these variants conveniently and exactly as I would like them (I would also be interested in how to do this on Windows 7 or Linux). I first tried the name of the kanji from which the radical is derived. Then I tried to use the Japanese names for them, such as risshinben and shitagokoro, hoping that the hiragana or katakana input would recognize them and propose their representation, but it did not work. So I looked into the Full Japanese Character Table, under the "by radical" tab, and found at least a version of each of them: 忄 (CJK 5FC4) and 㣺 (CJK 38FA), with the correct kun readings. I have them now as favorites, but do I need to do that for all radicals? Do I need to register all of them in a user dictionary? I would imagine that I am not the only one who wants to use them. Besides, the versions I have found are not suited for all occasions: they are centered on a standard kanji square. If I want them to appear next to a placeholder, or demonstrate their proportion to the rest of a typical kanji, I have to make complicated adjustments, depending on my use and the kind of radical. More generally, are there computer tools for Japanese dictionary editors and Japanese teachers that I could use on Mac OS X? (I could not add relevant tags such as: japanese or ideogram, please feel free to edit.)

    Read the article

  • centos postfix send email problem

    - by Catalin
    I have a big problem with postfix. I can receive mail in Webmin and Outlook, but I can't send (only locally can I send - user to user). Dovecot is working just fine. Sendmail is disabled. Please help me.

        postfix -n
        postfix: invalid option -- n
        postfix: fatal: usage: postfix [-c config_dir] [-Dv] command

        [root@xprivatecams usr]# postconf -n
        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        broken_sasl_auth_clients = yes
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        debug_peer_level = 2
        home_mailbox = Maildir/
        html_directory = no
        inet_interfaces = all
        mail_owner = postfix
        mailbox_command =
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        milter_default_action = accept
        smtpd_tls_auth_only = no
        milter_protocol = 2
        mydestination = $myhostname, localhost.$mydomain, localhost
        myhostname = xprivatecams.com
        mynetworks = 94.177.41.0/24, 127.0.0.0/8
        newaliases_path = /usr/bin/newaliases.postfix
        non_smtpd_milters = inet:localhost:20207
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES
        sample_directory = /usr/share/doc/postfix-2.3.3/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        smtp_tls_note_starttls_offer = yes
        smtp_use_tls = yes
        smtpd_milters = inet:localhost:20207
        smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_authenticated_header = yes
        smtpd_sasl_local_domain =
        smtpd_sasl_security_options = noanonymous
        smtpd_tls_CAfile = /etc/postfix/ssl/cacert.pem
        smtpd_tls_auth_only = no
        smtpd_tls_cert_file = /etc/postfix/ssl/smtpd.crt
        smtpd_tls_key_file = /etc/postfix/ssl/smtpd.key
        smtpd_tls_loglevel = 1
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_timeout = 3600s
        smtpd_use_tls = yes
        tls_random_source = dev:/dev/urandom
        unknown_local_recipient_reject_code = 550

    The mail log and a telnet test:

        Jan 18 00:46:17 xprivatecams postfix/postfix-script: starting the Postfix mail system
        Jan 18 00:46:17 xprivatecams postfix/master[15545]: daemon started -- version 2.3.3, configuration /etc/postfix
        Jan 18 00:48:00 xprivatecams postfix/pickup[15546]: EDE7EA8001B: uid=0 from=<[email protected]>
        Jan 18 00:48:00 xprivatecams postfix/cleanup[15817]: EDE7EA8001B: message-id=<[email protected]>
        Jan 18 00:48:00 xprivatecams opendkim[2776]: EDE7EA8001B: DKIM-Signature header added
        Jan 18 00:48:01 xprivatecams postfix/qmgr[15547]: EDE7EA8001B: from=<[email protected]>, size=615, nrcpt=1 (queue active)
        Jan 18 00:48:31 xprivatecams postfix/smtp[15820]: connect to mail.flabell.com[72.47.224.75]: Connection timed out (port 25)
        Jan 18 00:48:31 xprivatecams postfix/smtp[15820]: EDE7EA8001B: to=<[email protected]>, relay=none, delay=30, delays=0.08/0.03/30/0, dsn=4.4.1, status=deferred (connect to mail.flabell.com[72.47.224.75]: Connection timed out)

        telnet 94.177.41.70 25
        Trying 94.177.41.70...
        Connected to xprivatecams.com (94.177.41.70).
        Escape character is '^]'.
        220 xprivatecams.com ESMTP Postfix
        ehlo me
        250-xprivatecams.com
        250-PIPELINING
        250-SIZE 10240000
        250-VRFY
        250-ETRN
        250-STARTTLS
        250-AUTH LOGIN PLAIN
        250-AUTH=LOGIN PLAIN
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250 DSN

    Read the article

  • Chunking large rsync transfers?

    - by Gabe Martin-Dempesy
    We use rsync to update a mirror of our primary file server to an off-site colocated backup server. One of the issues we currently have is that our file server has 1TB of mostly smaller files (in the 10-100kb range), and when we're transferring this much data, we often end up with the connection being dropped several hours into the transfer. Rsync doesn't have a resume/retry feature that simply reconnects to the server to pick up where it left off -- you need to go through the file comparison process, which ends up being very lengthy with the amount of files we have. The solution that's recommended to get around this is to split up your large rsync transfer into a series of smaller transfers. I've figured the best way to do this is by first letter of the top-level directory names, which doesn't give us a perfectly even distribution, but is good enough. I'd like to confirm whether my methodology for doing this is sane, or if there's a simpler way to accomplish the goal.

    To do this, I iterate through A-Z, a-z, 0-9 to pick a one-character $prefix. Initially I was thinking of just running:

        rsync -av --delete --delete-excluded --exclude "*.mp3" "src/$prefix*" dest/

    (--exclude "*.mp3" is just an example, as we have a more lengthy exclude list for removing things like temporary files.)

    The problem with this is that any top-level directories in dest/ that are no longer present on src will not get picked up by --delete. To get around this, I'm instead trying the following:

        rsync \
          --filter 'S /$prefix*' \
          --filter 'R /$prefix*' \
          --filter 'H /*' \
          --filter 'P /*' \
          -av --delete --delete-excluded --exclude "*.mp3" src/ dest/

    I'm using the show and hide over include and exclude, because otherwise the --delete-excluded will delete anything that doesn't match $prefix. Is this the most effective way of splitting the rsync into smaller chunks? Is there a more effective tool, or a flag that I've missed, that might make this simpler?
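
    A small driver for that per-prefix loop might look like the sketch below (an illustration only, assuming the same filter rules as above; the paths and the exclude list are placeholders).

        #!/usr/bin/env python3
        """Run one rsync per leading character so a dropped link only costs one small chunk."""
        import string
        import subprocess

        SRC, DEST = "src/", "dest/"
        PREFIXES = string.ascii_uppercase + string.ascii_lowercase + string.digits

        for prefix in PREFIXES:
            cmd = [
                "rsync",
                "--filter", f"S /{prefix}*",   # show matching top-level dirs on the sender
                "--filter", f"R /{prefix}*",   # risk (allow deletion of) them on the receiver
                "--filter", "H /*",            # hide everything else from the transfer
                "--filter", "P /*",            # and protect it from --delete
                "-av", "--delete", "--delete-excluded", "--exclude", "*.mp3",
                SRC, DEST,
            ]
            # A non-zero exit usually means the link dropped; re-running just this prefix
            # is much cheaper than restarting the whole 1TB comparison.
            subprocess.run(cmd, check=False)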

    Read the article

  • mysqld_safe Can't log to error log and syslog at the same time. Remove all --log-error configuration options for --syslog to take effect

    - by photon
    When I'm trying to install MySQL 5.5 Community edition on my Ubuntu 10.04 by compiling the source code, I ran into the following problem:

        $ fg % 1
        sudo ../bin/mysqld_safe --basedir=/usr/local/mysql_community_5.5/data --user=mysql --defaults-file=/etc/my.cnf
        [sudo] password for linnan:
        Sorry, try again.
        [sudo] password for linnan:
        121023 09:26:21 mysqld_safe Can't log to error log and syslog at the same time. Remove all --log-error configuration options for --syslog to take effect.
        Internal program error (non-fatal): unknown logging method '/usr/local/mysql_community_5.5/log/mysql.log'
        121023 09:26:21 mysqld_safe Logging to '/var/log/mysql/error.log'.
        Internal program error (non-fatal): unknown logging method '/usr/local/mysql_community_5.5/log/mysql.log'
        121023 09:26:22 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
        121023 09:26:23 mysqld_safe mysqld from pid file /var/lib/mysql/ubuntu.pid ended

    It seems that the problem is related to log configuration. I've noticed a bugfix related to this problem: http://bugs.mysql.com/bug.php?id=50083 But I still have no idea how to solve it. The relevant content in /etc/my.cnf:

        [mysqld]
        port = 3306
        socket = /tmp/mysql.sock
        skip-external-locking
        key_buffer_size = 384M
        max_allowed_packet = 1M
        table_open_cache = 512
        sort_buffer_size = 2M
        read_buffer_size = 2M
        read_rnd_buffer_size = 8M
        myisam_sort_buffer_size = 64M
        thread_cache_size = 8
        query_cache_size = 32M
        # Try number of CPU's*2 for thread_concurrency
        thread_concurrency = 8
        character-set-server=utf8

        [mysqld-safe]
        basedir=/usr/local/mysql_community_5.5
        datadir=/usr/local/mysql_community_5.5/data

    And mysqld_safe_syslog.cnf (/etc/mysql/conf.d/mysqld_safe_syslog.cnf):

        [mysqld_safe]
        syslog

    Read the article

  • Creating fillable, saveable PDF in OpenOffice.org results in garbage

    - by Ubuntourist
    I've been creating beautiful fillable PDFs using OOo Writer under Ubuntu for a few years. However, I've recently been asked to make them saveable rather than just printable. So I go to my colleague's Windows computer, which has Adobe Acrobat Professional 8, and follow the directions outlined in "Save filled form in PDF file in Ubuntu" - and I end up with an unreadable, unfillable document. Acrobat Reader opens it, but it's garbage. It looks like it might be a character encoding issue. The document was created using Arial under Ubuntu. I installed OOo on the Windows box and changed the font to Tahoma. But with either font, the resulting file is a jumble of boxes and oddly placed random characters. Given that it fails with a fairly ubiquitous font and a Microsoft-specific font, I'm guessing it's not a font issue. Until I enable the rights, the PDF is readable both with Acrobat Reader and Acrobat Pro. Anyone else encounter this problem? If so, did you solve it? Thanks.

    Read the article

  • Unzip individual files from multiple zip files and extract those individual files to home directory

    - by James P.
    I would like to unzip individual files. These files have a .txt extension. These files also live within multiple zipped files. Here is the command I'm trying to use:

        unzip -jn /path/to/zipped/files/zipArchiveFile2011\*.zip /path/to/specific/individual/files/myfiles2011*.txt -d /path/to/home/directory/for/extract/

    From my understanding, the -j option excludes directories and will extract only the txt files. The -n option will not overwrite a file if it has already been extracted. I've also learned that the backslash in /path/to/zipped/files/zipArchiveFile2011\*.zip is necessary to escape the wildcard (*) character. Here is a sample of the error messages I'm coming across:

        Archive: /path/to/zipped/files/zipArchiveFile20110808.zip
        caution: filename not matched: /path/to/specific/individual/files/myfiles20110807.txt
        caution: filename not matched: /path/to/specific/individual/files/myfiles20110808.txt
        Archive: /path/to/zipped/files/zipArchiveFile20110809.zip
        caution: filename not matched: /path/to/specific/individual/files/myfiles20110810.txt
        caution: filename not matched: /path/to/specific/individual/files/myfiles20110809.txt

    I feel that I'm missing something very simple. I've tried using single quotes (') and double quotes (") around directory paths. But no luck.
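
    If the shell quoting keeps getting in the way, the same job can also be done with a short script. A sketch (illustrative only: the paths and patterns below are the question's placeholders, and it mimics -j and -n by junking paths and skipping files that already exist):

        #!/usr/bin/env python3
        """Pull matching .txt members out of many zip archives into one directory."""
        import fnmatch, glob, os, zipfile

        ARCHIVES = "/path/to/zipped/files/zipArchiveFile2011*.zip"
        WANTED   = "*myfiles2011*.txt"
        OUTDIR   = "/path/to/home/directory/for/extract/"

        for archive in glob.glob(ARCHIVES):
            with zipfile.ZipFile(archive) as zf:
                for member in zf.namelist():
                    if fnmatch.fnmatch(member, WANTED):
                        target = os.path.join(OUTDIR, os.path.basename(member))  # -j: junk paths
                        if os.path.exists(target):                               # -n: never overwrite
                            continue
                        with zf.open(member) as src, open(target, "wb") as dst:
                            dst.write(src.read())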

    Read the article

  • How to deal with DELL support system?

    - by Nishant Kumar
    We have purchased a Dell Optiplex 9010 SSFV for our organization's work. Since the first installation, two of the USB keyboard keys have not been working properly. I had to press those keys twice: the first press did not register, and the second press printed two characters (as if it were buffering the first character). The two keys that were not working properly:

    - Grave accent / backtick (below the ESC key: `)
    - Double quote (left of the Enter key: ")

    We registered our complaint with Dell and they suggested (with some hard-to-understand and weird English accent) some tests and tricks, such as switching to different ports and checking the keyboard on a different PC; it worked well with the different PC (with Windows 7 Home Premium installed). It was clear that it is an OS fault, hence they suggested re-installing the OS. The problem began here: we have a project on the run and currently a video editing project set up on our system, so we can't re-install the system in a hurry, and the Dell people were not providing any other solution, such as updating the keyboard driver, etc.

    Arguments: I am a software engineer and don't think it is a feasible solution to re-install an entire system for simple problems. This problem has been present since the fresh system installation, so I don't think re-installing will solve it. Finally, I had to find the solution myself and got it here; now I want to show my disappointment to the Dell people, or at least tell them that they should improve their support system and not advise re-installing an entire system for such simple problems.

    Notes: We have purchased 5 years of NEXT-business-day support from Dell for around 8000 INR (not for that kind of solution from Dell). It is Dell India support. So can anyone tell me how to tackle the Dell support system officially, so that they will pay more attention in the near future? Thanks

    Read the article

  • How can I view the binary contents of a file natively in Windows 7? (Is it possible.)

    - by Shannon Severance
    I have a file, a little bigger than 500MB, that is causing some problems. I believe the issue is in the end-of-line (EOL) convention used, and I would like to look at the file in its uninterpreted raw form to confirm it. How can I view the "binary" of a file using something built in to Windows 7? I would prefer to avoid having to download anything additional.

    My coworker and I opened the file in text editors (TextEdit and Emacs 24.2), and they show the lines as one would expect. But both text editors will open files with different EOL conventions and interpret them automagically, so that doesn't tell me what is actually in the file. For Emacs I had created a second file with just the first 4K bytes using head -c4096 on a Linux box and opened that from my Windows box. I attempted to use hexl-mode in Emacs, but when I went to hexl-mode and back to text-mode, the contents of the buffer had changed, adding a visible ^M to the end of each line, so I'm not trusting that at the moment. Based on other evidence, I believe the EOL convention is carriage return only.

    To know what is actually in the file, I would like to look at the binary contents of the file, or at least a couple thousand bytes of it, preferably in hex, though I could work with decimal or octal. Just ones and zeros would be pretty rough to look at.
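
    This isn't the built-in-to-Windows answer being asked for, but purely as an illustration of the check itself, a few lines (here in Python; any tool that can read raw bytes works the same way) that count the three EOL conventions in the first chunk of the file:

        # Count the three line-ending conventions in the first 4 KB of a file.
        with open(r"C:\path\to\bigfile.txt", "rb") as f:    # path is a placeholder
            chunk = f.read(4096)
        crlf = chunk.count(b"\r\n")
        cr_only = chunk.count(b"\r") - crlf                 # bare carriage returns
        lf_only = chunk.count(b"\n") - crlf                 # bare linefeeds
        print("CRLF:", crlf, " CR only:", cr_only, " LF only:", lf_only)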

    Read the article

  • Mounting NAS drive with cifs using credentials file through fstab does not work

    - by mahatmanich
    I can mount the drive in the following way, no problem there:

        mount -t cifs //nas/home /mnt/nas -o username=username,password=pass\!word,uid=1000,gid=100,rw,suid

    However, if I try to mount it via fstab I get an error. The fstab entry:

        //nas/home /mnt/nas cifs iocharset=utf8,credentials=/home/username/.smbcredentials,uid=1000,gid=100 0 0 auto

    The .smbcredentials file looks like this:

        username=username
        password=pass\!word

    Note the ! in my password ... which I am escaping in both instances. I also made sure there are no EOLs in the file using :set noeol binary, from "Mount CIFS Credentials File has Special Character". chmod on the .credentials file is 0600, chown is root:root, and the file is under ~/. Why does the manual mount work but not the fstab one?? I am running Ubuntu 12 LTS, and mount.cifs -V gives me mount.cifs version: 5.1. Any help and suggestions would be appreciated ...

    UPDATE: /var/log/syslog shows the following:

        [26630.509396] Status code returned 0xc000006d NT_STATUS_LOGON_FAILURE
        [26630.509407] CIFS VFS: Send error in SessSetup = -13
        [26630.509528] CIFS VFS: cifs_mount failed w/return code = -13

    UPDATE no 2: Debugging with strace. Mount through fstab:

        strace -f -e trace=mount mount -a
        Process 4984 attached
        Process 4983 suspended
        Process 4985 attached
        Process 4984 suspended
        Process 4984 resumed
        Process 4985 detached
        [pid 4984] --- SIGCHLD (Child exited) @ 0 (0) ---
        [pid 4984] mount("//nas/home", ".", "cifs", 0, "ip=<internal ip>,unc=\\\\nas\\home"...) = -1 EACCES (Permission denied)
        mount error(13): Permission denied
        Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
        Process 4983 resumed
        Process 4984 detached

    Mount through terminal:

        strace -f -e trace=mount mount -t cifs //nas/home /mnt/nas -o username=user,password=pass\!wd,uid=1000,gid=100,rw,suid
        Process 4990 attached
        Process 4989 suspended
        Process 4991 attached
        Process 4990 suspended
        Process 4990 resumed
        Process 4991 detached
        [pid 4990] --- SIGCHLD (Child exited) @ 0 (0) ---
        [pid 4990] mount("//nas/home", ".", "cifs", 0, "ip=<internal ip>,unc=\\\\nas\\home"...) = 0
        Process 4989 resumed
        Process 4990 detached

    Read the article

  • Sql Server Management Studio: Change Prefix or Suffix characters

    - by PhantomDrummer
    I have an instance of SSMS 2008, for which the option to edit data in a table doesn't work. If I right-click on any table in the Object Explorer and select 'Edit top 200 rows' I get an error dialog 'Invalid prefix or suffix characters. (MS Visual Database Tools)'. The error seems to be associated specifically with SSMS, not with SQL Server (because this SSMS instance gives the same error no matter what database I connect to, but I've verified I can connect to some of the same databases using SSMS on other machines without the error). (However, our firewall prevents me using SSMS on other machines for some crucial tasks, so I do need to fix the problem). Googling for the error suggests that I should change the prefix, suffix or escape character, but without any indication of how you can make that change in SSMS. I'd also note that I'm not aware of having done any customization on SSMS since installing it, so would be surprised at having to make such a change now. Does anyone have any idea what the error message means or what I can do about it? Or how I can change the prefix/suffix/escape characters if that is really the problem.

    Read the article
