Search Results

Search found 31800 results on 1272 pages for 'nrf big show'.


  • An easily customizable linux distribution using minimal disk space?

    - by Frank
    I'm looking for a Linux distribution that can easily be used to create my own distribution: the same base system with some extra software installed. So basically I should be able to create an ISO which, when installed, gives me the base distribution plus my desired software. More specifically, I plan on installing MySQL and a bit of my own software, which shouldn't be too big. However, this distribution needs to be extremely small in terms of disk space; the whole thing, including MySQL, should not exceed 100MB. It should, of course, still be able to connect to the internet and perform other standard functions. I don't need X or any sort of window manager, and would prefer not to have one since it would increase disk usage. So far I have tried ttylinux and Tiny Core Linux. I've found that ttylinux, while extremely small, has almost nothing, so that MySQL can't even be installed. Tiny Core Linux, on the other hand, is a bit too big. I've also found OpenEmbedded and Linux From Scratch, but I would prefer the install and build process to be much easier. What other distribution would you recommend for my purposes? Minimizing disk usage is the most important, followed by ease of installing and creating the custom distribution.

    Read the article

  • AVCHD MTS h264 1080p file with choppy playback in Linux

    - by marc
    When I try to play video files from my camera: Seems stream 0 codec frame rate differs from container frame rate: 50.00 (50/1) -> 50.00 (50/1) Input #0, mpegts, from '00027.MTS': Duration: 00:00:38.88, start: 2.884289, bitrate: 16945 kb/s Program 1 Stream #0.0[0x1011]: Video: h264 (High), yuv420p, 1920x1080 [PAR 1:1 DAR 16:9], 50 fps, 50 tbr, 90k tbn, 50 tbc Stream #0.1[0x1100]: Audio: ac3, 48000 Hz, stereo, s16, 256 kb/s … on my Linux computer (Ubuntu 12.04), I get choppy playback. It's completely unusable... I tried Totem, VLC and mplayer; the result is always the same issue. I sent the same video file to a friend who has Ubuntu 10.04 to test, and he also has the same issue. He also has Windows 7, and confirms that on Windows the video plays well. I have an Intel® Core™2 CPU 6300 @ 1.86GHz × 2 with a GF 9600 GT, with the closed NVIDIA drivers. This is not an issue of big files playing slowly from an HDD; I have an SSD drive! I spent the last days and nights trying hundreds of commands for ffmpeg, HandBrake, mencoder... None of them let me create a file with enough quality. I downloaded a few movies from YouTube in 1080p, and playback worked well without any big pixels or choppiness. I would like the highest possible quality; I will put these files onto a Blu-ray disc, so I don't need to compress them to a smaller size. I just want smooth playback on my Linux box. On Windows, the same file works well.
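
    Since a dual-boot Windows 7 on the same hardware plays the file fine, one thing worth ruling out is the lack of hardware decoding: decoding 1080p50 H.264 in software on a 1.86GHz Core 2 is marginal, while the GF 9600 GT can do it through VDPAU with the closed NVIDIA driver. A minimal check, assuming mplayer was built with VDPAU support (the filename is the one from the ffmpeg output above):

        # let the GPU decode the H.264 stream instead of the CPU
        mplayer -vo vdpau -vc ffh264vdpau 00027.MTS

    If that plays smoothly, the problem is CPU-bound software decoding; in VLC the corresponding switch is the GPU-accelerated decoding option under Tools -> Preferences -> Input & Codecs.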

    Read the article

  • Reliable appliance for routing IT emergency calls (SIP and ISDN)

    - by chiborg
    We have a fairly big IT installation and our IT staff needs to be reachable 24/7. At the moment we have the following setup for "emergency" calls to our IT staff on our main Asterisk box: an incoming emergency number (connected via SIP trunk, plus a BRI card in case the SIP trunk goes down). When the number is called during office hours, all the SIP phones of the IT staff ring simultaneously. When the number is called outside office hours, a list of mobile phone numbers is called, one after another, until someone picks up. The list can be changed by the IT staff via a command-line script. The setup works well, but the Asterisk box is heavily used in a call center and has experienced some outages and misconfigurations, each of them bringing down the IT emergency number. So we'd like to put the IT emergency call functionality on a separate device. This does not need to be a big server, and it does not even need to be Asterisk; it only has one purpose and should do it reliably. It should be very low-maintenance. Any suggestions for hardware and software?
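
    For reference, the behaviour described above is small enough to fit in a few lines of dialplan, which is why almost any low-power box running a minimal Asterisk (or a comparable SIP proxy) could carry it. A rough extensions.conf sketch with placeholder peer names and numbers, not the actual config:

        [it-emergency]
        ; office hours: ring every IT SIP phone at once for 30 seconds
        exten => 100,1,GotoIfTime(08:00-18:00,mon-fri,*,*?office:oncall)
        exten => 100,n(office),Dial(SIP/it-alice&SIP/it-bob&SIP/it-carol,30)
        exten => 100,n,Hangup()
        ; out of hours: cascade through the on-call mobile list via the trunk
        exten => 100,n(oncall),Dial(SIP/trunk/0791112233,25)
        exten => 100,n,Dial(SIP/trunk/0794445566,25)
        exten => 100,n,Hangup()

    The on-call list edited by the command-line script could simply be generated into an included config file, so the separate device stays as dumb and low-maintenance as possible.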

    Read the article

  • Clipboard bug in Wordpad in Windows 7 (accidentally pasting large file into application)

    - by frenchglen
    In Win7 I use Wordpad, and I really like it. For my needs it's lean and fast, yet has the formatting functionality I'm after when working on my TXT/RTF files on a daily basis. I don't intend to change text editors. There's a really bad bug which has ALWAYS plagued me. If you have a large file in the clipboard, like a 238MB FLAC file, and you accidentally paste it into Wordpad for whatever reason, it hangs the application for a VERY long time (like 2 hours; it depends on how big the file is, because it tries to 'handle' it). You either have to close the application and lose any unsaved changes, or go do something else until the item has finished pasting into Wordpad (it eventually drops the file's icon into Wordpad just as it appears in Windows Explorer). It's a Windows bug, a Wordpad bug. Is there some solution for this? Or is the problem fixed in Windows 8 (if anyone can tell me)? I'm not going to try out Win8 myself merely to answer this question - that's what I'm asking it on SuperUser for! I'm really hoping it's one of those little-yet-big things that they've fixed in Win8 (like removing the 255-character file path limit in Explorer, which is awesome). Thank you for your help, if you have Win8 handy and can test this. :)

    Read the article

  • Which is the fastest way to move 1Petabyte from one storage to a new one?

    - by marc.riera
    First of all, thanks for reading, and sorry for asking something related to my job. I understand that this is something I should solve by myself, but as you will see it's a bit difficult. A short description: Current storage = 1PB using DDN S2A9900 storage for the OSTs, 4 OSS, 10GigE network (Lustre 1.6); 100 compute nodes with 2x InfiniBand; 1 InfiniBand switch with 36 ports. New storage = the previous storage plus another 1PB using DDN S2A 990 or LSI E5400 (still to decide) (Lustre 2.0); 8 OSS, 10GigE network; 100 compute nodes with 2x InfiniBand. Previous experience: transferred 120 TB in less than 3 days using the following command: tar -C /old --record-size 2048 -b 2048 -cf - dir | tar -C /new --record-size 2048 -b 2048 -xvf - 2>&1 | tee /tmp/dir.log So, the big problem here: doing the maths, I conclude that we are going to need a month to transfer the data from the old storage to the new one. During this time the researchers will have to step back, and I'm personally not happy with this. I mention that we have InfiniBand connections because I think there may be a chance to use them, spreading the transfer across 18 compute nodes (18 * 2 IB = 36 ports) to copy the data from one storage to the other. I'm trying to figure out whether the IB switch will handle all that traffic; if it does, this should be much faster than using 10GigE. Also, having Lustre 1.6 and 2.0 agents on the same server works quite well, so there is no need to go through 1.8 and upgrade the metadata servers in two steps. Any ideas? Many thanks. Note 1: Zoredache, we can divide it into two blocks, (A) 600TB and (B) 400TB. The idea is to move (A) to the new storage, which is Lustre 2.0 formatted, then reformat where (A) was with Lustre 2.0, move (B) onto that Lustre 2.0 block, and extend it with the space where (B) was. This way we end up with (A) and (B) on separate filesystems, with 1PB each.
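
    One way to read the 18-node idea is simply to run many of the proven tar pipes at once, one per top-level directory, fanned out over the compute nodes so that no single host or link is the bottleneck. A rough sketch, assuming both filesystems are mounted on every node and using hypothetical host names node1..node18:

        #!/bin/bash
        # fan out the copy: one tar pipe per top-level directory, round-robin over 18 nodes
        i=0
        for dir in /old/*/; do
            node="node$(( i % 18 + 1 ))"; i=$(( i + 1 ))
            name=$(basename "$dir")
            ssh "$node" "tar -C /old --record-size 2048 -b 2048 -cf - '$name' \
                | tar -C /new --record-size 2048 -b 2048 -xf -" \
                > "/tmp/$name.log" 2>&1 &
        done
        wait   # block until every per-directory stream has finished

    Whether the switch or the OSSes saturate first is the open question, but this at least turns one month-long stream into many smaller ones that can be restarted per directory.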

    Read the article

  • Maximum number of connections in Squid

    - by Isaac
    I have a Squid proxy server that controls all internet traffic for my network. I need a way to stop users from downloading big files (say, over 50MB) on my network. I banned some well-known ports (e.g. torrent), but some downloads are still possible over the HTTP port, and obviously I cannot ban port 80! A simple solution is limiting the maximum number of simultaneous connections for each IP (e.g. 3 connections). It's possible in Squid with this config: acl ACCOUNTSDEPT 192.168.5.0/24 acl limitusercon maxconn 3 http_access deny ACCOUNTSDEPT limitusercon But this solution has a really bad impact on web browsing, because any smart browser fetches different parts of a website over several simultaneous connections to speed things up. With a hard cap on connections, the browser fails to fetch some parts and the website is shown only partially; some parts/images/frames are missing. So, can we limit the maximum number of persistent connections instead? I think this policy would work: limit the number of connections that stay alive for more than 10 seconds, but leave the number of simultaneous connections per IP unlimited. But how can we implement this policy in Squid? With which config? UPDATE: artifex and Tom Newton suggested a bandwidth-limiting approach to fight the downloaders. But bandwidth-limiting in Squid has a shortcoming: it's static and cannot change dynamically, so a person gets the same limited bandwidth no matter how many people are using the internet (maybe nobody!). Also, this solution does not stop people from downloading; they can still download, just at a lower speed. But if we find a way to terminate persistent connections (or any connection that stays alive longer than a specific time), downloading big files will be almost impossible (well, there is always some way!).
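
    If the real goal is blocking large downloads rather than counting connections, Squid can also refuse oversized responses directly. A minimal sketch in the 3.x config syntax (the 2.x directive uses an allow/deny form instead), reusing the subnet from the question and assuming 50 MB is the intended cutoff:

        # define the client subnet (note the 'src' acl type)
        acl accounts src 192.168.5.0/24

        # refuse to deliver any HTTP reply body larger than 50 MB to that subnet
        reply_body_max_size 50 MB accounts

    Requests past the limit get an error page instead of the file, so ordinary browsing with many small requests is unaffected.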

    Read the article

  • PEAP validating a secondary domain suffix

    - by sam
    The title is probably a little confusing, so let me explain the situation. Our company wants to implement a corporate wireless LAN with PEAP authentication. Unfortunately someone made a big mistake in our AD design 10 years ago: the domain name we are using, "company.ch", is not owned by our company but by someone else, so it is not possible to issue a public SSL certificate for the RADIUS server. Our AD is too big to rename. We already thought about using our private PKI and rolling out the CA certificate via GPO, but that would only cover our corporate-managed clients, not the BYOD devices (smartphones, tablets, laptops...). Is there a way to add a secondary domain name like "company2.ch", issue a public certificate for it, join the RADIUS server to that secondary domain as well, and configure that secondary DNS suffix via DHCP for all the client pools? Or is there another way, for example a new RADIUS server with its own domain company2.ch that is connected to the company.ch domain with some kind of trust? Sorry, I'm not a client/server guy... hopefully you get my drift!?

    Read the article

  • Best practices for thin-provisioning Linux servers (on VMware)

    - by nbr
    I have a setup of about 20 Linux machines, each with about 30-150 gigabytes of customer data. Probably the size of data will grow significantly faster on some machines than others. These are virtual machines on a VMware vSphere cluster. The disk images are stored on a SAN system. I'm trying to find a solution that would use disk space sparingly, while still allowing for easy growing of individual machines. In theory, I would just create big disks for each machine and use thin provisioning. Each disk would grow as needed. However, it seems that a 500 GB ext3 filesystem with only 50 GB of data and quite a low number of writes still easily grows the disk image to eg. 250 GB over time. Or maybe I'm doing something wrong here? (I was surprised how little I found on the subject with Google. BTW, there's even no thin-provisioning tag on serverfault.com.) Currently I'm planning to create big, thin-provisioned disks - but with a small LVM volume on them. For example: a 100 GB volume on a 500 GB disk. That way I could more easily grow the LVM volume and the filesystem size as needed, even online. Now for the actual question: Are there better ways to do this? (that is, to grow data size as needed without downtime.) Possible solutions include: Using a thin-provisioning friendly filesystem that tries to occupy the same spots over and over again, thus not growing the image size. Finding an easy method of reclaiming free space on the partition (re-thinning?) Something else? A bonus question: If I go with my current plan, would you recommend creating partitions on the disks (pvcreate /dev/sdX1 vs pvcreate /dev/sdX)? I think it's against conventions to use raw disks without partitions, but it would make it a bit easier to grow the disks, if that is ever needed. This is all just a matter of taste, right?
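
    For the "grow the LVM volume and the filesystem size as needed, even online" part of the plan, the mechanics are short. A sketch with hypothetical names (volume group vg_data, logical volume lv_data), assuming ext3/ext4, both of which support online growth:

        # the 500 GB thin-provisioned disk still has free extents in the VG,
        # so growing the 100 GB volume while it is mounted is two commands
        lvextend -L +50G /dev/vg_data/lv_data    # take another 50 GB from the volume group
        resize2fs /dev/vg_data/lv_data           # grow the filesystem online to the new size

    On the bonus question: pvcreate /dev/sdX without a partition table does make later growth slightly simpler, since there is no partition to resize first; the partitioning convention mostly guards against other tools mistaking the disk for empty.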

    Read the article

  • BIOS and Windows cannot detect CDROM device

    - by eman
    Hello! I have an HL-DT-ST RW/DVD GCC-4521B DVD-ROM device and a big problem. A few days ago everything worked fine. A friend installed some software, and afterwards the drives were marked as corrupt in WinXP. I uninstalled the software, but the drives were still marked as corrupt. The next step was running the current software for the drive, GCC-4521B101(E).exe. When I ran this software, the drive was automatically updated, but still marked as corrupt (in the Device Manager), even after a reboot. And then the big mistake: I tried to run this software once more, but during the update process the machine restarted and boom! The DVD-ROM device doesn't work anymore. The LED doesn't blink, and if I push the eject button, nothing happens. The BIOS and WinXP no longer recognize the optical drive either. I then plugged in another optical drive and it worked, but my old drive seems to be dead. So, what happened, and how can I solve this problem? Please help. Regards!

    Read the article

  • Mass-migrating from POP3 to Exchange 2010, how do I copy mailboxes?

    - by Erik P. Skaalerud
    I'm in the process of planning our migration from an internally hosted POP3 server (Dovecot) to Exchange 2010. We're using Outlook 2003 for the moment, but will soon upgrade to Outlook 2010. The big problem is that we have about 50 computers here in our HQ, plus ~30 clients in branch offices (which will get their Exchange migration sometime later). I'm the only IT person, and having to go around and manually set up Outlook and copy over everyone's PST contents is not an option I'm looking for. Some users have set Outlook to keep messages on the POP3 server for X number of days, others have not. Using a POP3 connector to transfer the mail over is not a viable option. Here is what I've done so far: created a transform for the Office 2003 administrative installation point; created a .PRF file to modify any existing e-mail account to switch over to Exchange (including the RPC-encrypt hotfix described in MSKB 2006508); tested both the transform and the PRF, and both work; created a test OU and GPO containing the Office 2003 installation with the transform applied, which also works. My big question is: how can I force Outlook to import any existing .PST into the new Exchange mailbox when the user starts Outlook for the first time after the MST/PRF have been applied? Is this possible?
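
    An alternative to pushing the PST import through the client at first start is to pull the PSTs in centrally once the mailboxes exist. This is not the PRF route asked about, just a server-side option that Exchange 2010 SP1 provides; the share path and the assumption that each PST is named after the user's alias are placeholders:

        # requires the "Mailbox Import Export" role assignment and PSTs on a UNC share
        Get-ChildItem \\fileserver\pst-staging\*.pst | ForEach-Object {
            # one import request per file; the file name (minus .pst) is assumed to be the alias
            New-MailboxImportRequest -Mailbox $_.BaseName -FilePath $_.FullName
        }

        # watch progress of the queued imports
        Get-MailboxImportRequest | Get-MailboxImportRequestStatistics

    The PSTs still have to be collected onto the share once, but that can be scripted over the existing GPO/logon infrastructure instead of visiting every desk.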

    Read the article

  • sys.dm_exec_query_stats interaction with recompilation

    - by Sam Saffron
    We use sys.dm_exec_query_stats to track down slow queries and queries that are IO offenders. This works great; we get a lot of very insightful stats. Clearly this is not as accurate as running a profiler trace, as you have no idea when SQL Server will decide to chuck out an execution plan. We have quite a few queries where the wrong execution plan is cached. For example, queries like the following: SELECT TOP 30 a.Id FROM Posts a JOIN Posts q ON q.Id = a.ParentId JOIN PostTags pt ON q.Id = pt.PostId WHERE a.PostTypeId = 2 AND a.DeletionDate IS NULL AND a.CommunityOwnedDate IS NULL AND a.CreationDate > @date AND LEN(a.Body) > 300 AND pt.Tag = @tag AND a.Score > 0 ORDER BY a.Score DESC The problem is that the ideal plan really depends on the date selected (screenshot of ideal plan). However, if the wrong plan is cached, it totally chokes when the date range is big (notice the big fat lines). To overcome this we were recommended to use either OPTION (OPTIMIZE FOR UNKNOWN) or OPTION (RECOMPILE). OPTIMIZE FOR UNKNOWN results in a slightly better plan, which is far from optimal; executions are tracked in sys.dm_exec_query_stats. RECOMPILE results in the best plan being chosen, however no execution counts or stats are tracked in sys.dm_exec_query_stats. Is there another DMV we could use to track stats on queries with OPTION (RECOMPILE)? Is this behavior by design? Is there another way we can force recompilation while keeping stats tracked in sys.dm_exec_query_stats? Note: the framework will always execute parameterized queries using sp_executesql.
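
    For reference, this is roughly how those cumulative stats are usually read back out, tying sys.dm_exec_query_stats to the statement text; it is a generic top-N-by-reads sketch, not specific to the queries above. It also illustrates the limitation in the question: plans compiled with OPTION (RECOMPILE) are discarded after execution, so they never accumulate rows here.

        -- top statements by logical reads, with the statement text carved out of the batch
        SELECT TOP 30
            qs.execution_count,
            qs.total_logical_reads,
            qs.total_worker_time,
            SUBSTRING(st.text,
                      qs.statement_start_offset / 2 + 1,
                      (CASE qs.statement_end_offset
                           WHEN -1 THEN DATALENGTH(st.text)
                           ELSE qs.statement_end_offset
                       END - qs.statement_start_offset) / 2 + 1) AS statement_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_logical_reads DESC;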

    Read the article

  • Can a USB/IDE/SATA adapter be flaky?

    - by Ward
    I use USB/IDE/SATA converters a lot and on the two that I have now, I sometimes get errors copying files to drives. It only happens when I'm copying big files to the drive (big can mean as little as 100MB, I think it happens more often with bigger files - 300MB or more), and basically the copy will fail and I'll get one or more error messages about "Delayed write failed." But if I disconnect the drive and re-connect it, I'll usually be able to continue. (The file that was being copied will be corrupt, but otherwise the drive is fine.) I just noticed a new type of flakiness: the data transfer rate can vary widely. I copied one set of files (5x300MB files) and it took 10+minutes, then I copied another set (approx. the same sizes) and it took less than a minute. I haven't done systematic testing, the other things I'm doing on my laptop at the same time might have some impact, and I haven't cross-checked the two adapters I have and the 3 hard drives I'm working with to see if there's a pattern. I'm more wondering if anyone else has seen anything like this.

    Read the article

  • Why is my rsync so slow compared to pure cp or even scp?

    - by nfm
    I'm transferring files from Linux to Windows 7 via a mounted share (the share is mounted from Windows on the Linux box). I'm copying lots of data (nearly a TB) from the old to the new machine within my LAN, and unfortunately I only have 100MBit. Naturally I blindly used rsync, but after a day I started wondering why it feels so slow. Enabling the progress meter showed me a transfer rate of about 2MBit/s. So I took a reasonably big file (800MB) and tracked the transfer timing: cp: 05:33, scp (*): 06:33, rsync: 21:51. (*) scp via localhost to the same Linux machine directly onto the share; completely useless, but it provided a progress meter. The tests were as simple as (cp|scp|rsync) <source> <destination>, with no special arguments except host/port for scp. I even tried the -W switch for rsync but cancelled after ten minutes. rsync is 3.0.3 running on Lenny. Being able to interrupt the copy process at any time and resume it is what led me to rsync, but now I think I seriously need to reconsider this requirement. How is such a big difference possible?
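
    For what it's worth, the two rsync behaviours that hurt most with a CIFS destination are the delta algorithm (the receiving side has to read the existing file back over the same slow mount) and the write-to-a-temp-file-then-rename pattern. Both can be switched off; a sketch of the combination, with placeholder paths:

        # -a            archive mode (permissions are approximated on CIFS anyway)
        # --whole-file  skip the delta algorithm: never read the destination back over the mount
        # --inplace     write directly into the target file instead of a temp file plus rename
        # --progress    keep the transfer-rate readout mentioned above
        rsync -a --whole-file --inplace --progress /data/ /mnt/win7-share/data/

    Resumability is mostly preserved: files that were already copied completely are skipped by size/mtime on the next run even with the delta algorithm disabled.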

    Read the article

  • What kind of hosting do I need? [closed]

    - by Robert Smith
    I have been trying to answer this question myself, but I haven't found a specific answer for my situation. As I want to pay only for what I need, I thought I could get a good answer here. I have a custom-made forum (rather than a built-in forum like the ones you can find as plugins, e.g. WP-Forum, or phpBB-type software) written in Django. I don't want to use Apache and mod_wsgi because it's usually very memory-hungry and I can't afford a big server. I prefer a combination of nginx and gunicorn, which I think is very efficient (maybe you can also tell me what you think about that). I'm expecting to receive 10,000 to 20,000 visits each month with 15,000 to 30,000 page impressions. I have reviewed some cloud services like Amazon EC2 and Rackspace and other more traditional services (Linode). This site won't use videos or big images, and I certainly don't need a huge amount of bandwidth (200GB would definitely be too much). I need shell access, so shared hosting is out of the question. What do I need to run a website like that without problems? What about RAM? Would 256MB be enough (that's the amount of RAM offered by the small instances at Amazon and Rackspace)? Do you know of any alternative to those I mentioned? If you need more information to provide a useful answer, please don't hesitate to ask. Thanks a lot.
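
    To make the nginx + gunicorn combination concrete (and to show why it fits in a small instance): a few gunicorn workers sit behind nginx, and nginx serves the static files itself. A minimal sketch with placeholder names (myforum, /srv/myforum); the exact gunicorn invocation depends on the Django version:

        # start three gunicorn workers, bound to localhost only
        gunicorn myforum.wsgi:application --workers 3 --bind 127.0.0.1:8000

        # /etc/nginx/sites-available/myforum
        server {
            listen 80;
            server_name forum.example.com;

            location /static/ {
                alias /srv/myforum/static/;        # nginx serves static files directly
            }
            location / {
                proxy_pass http://127.0.0.1:8000;  # everything else goes to gunicorn
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    At that traffic level, three workers of very roughly 40-60 MB each plus nginx makes 256MB tight once the database is added; 512MB buys comfortable headroom.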

    Read the article

  • Tuning up a MySQL server

    - by NinjaCat
    I inherited a mysql server, and so I've started with running the MySQLTuner.pl script. I am not a MySQL expert but I can see that there is definitely a mess here. I'm not looking to go after every single thing that needs fixing and tuning, but I do want to grab the major, low hanging fruit. Total Memory on the system is: 512MB. Yes, I know it's low, but it's what we have for the time being. Here's what the script had to say: General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance MySQL started within last 24 hours - recommendations may be inaccurate Enable the slow query log to troubleshoot bad queries When making adjustments, make tmp_table_size/max_heap_table_size equal Reduce your SELECT DISTINCT queries without LIMIT clauses Increase table_cache gradually to avoid file descriptor limits Your applications are not closing MySQL connections properly Variables to adjust: query_cache_limit (> 1M, or use smaller result sets) tmp_table_size (> 16M) max_heap_table_size (> 16M) table_cache (> 64) innodb_buffer_pool_size (>= 326M) For the variables that it recommends that I adjust, I don't even see most of them in the mysql.cnf file. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] innodb_buffer_pool_size = 220M innodb_flush_log_at_trx_commit = 2 innodb_file_per_table = 1 innodb_thread_concurrency = 32 skip-locking big-tables max_connections = 50 innodb_lock_wait_timeout = 600 slave_transaction_retries = 10 innodb_table_locks = 0 innodb_additional_mem_pool_size = 20M user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking bind-address = localhost key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 4 myisam-recover = BACKUP query_cache_limit = 1M query_cache_size = 16M log_error = /var/log/mysql/error.log expire_logs_days = 10 max_binlog_size = 100M skip-locking innodb_file_per_table = 1 big-tables [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] [isamchk] key_buffer = 16M !includedir /etc/mysql/conf.d/
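
    Most of the variables the tuner lists do not appear in that file simply because they were never set, so MySQL is running on its compiled-in defaults for them (the tuner output itself implies tmp_table_size and max_heap_table_size at 16M and table_cache at 64). Adding them under the existing [mysqld] section is enough. A conservative sketch for a 512MB box; the innodb_buffer_pool_size >= 326M suggestion is best ignored until the machine has more RAM, since the existing 220M pool plus per-connection buffers already pushes the limit:

        [mysqld]
        # in-memory temp tables: keep the two values equal, as the tuner recommends
        tmp_table_size        = 32M
        max_heap_table_size   = 32M

        # raise the table cache gradually and watch Opened_tables and the fd limit
        # (the variable is named table_open_cache on MySQL 5.5 and later)
        table_cache           = 128

        # leave innodb_buffer_pool_size at 220M until the box gets more than 512MB of RAM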

    Read the article

  • How to find what files / directories are not copied yet?

    - by user8676
    Hi all, I found myself in the following 'nice' situation: An archive of a few disks (three, actually) which holds a bunch of (more or less) organized photos. Well, this is good. A big disk shared on the network which holds a bunch of photos in a different folder structure (even if it is somewhat recognizable to a human being) than the archive described above, but some of the files on this big network share are the same as files in the archive. Well, this is bad. What we need is to move the different (new) files from the network share into the archive (perhaps onto a new disk added to the archive). The program we need is different from a regular file duplicate finder, because a duplicate finder finds the duplicates across all sources by comparing each file with every other. We want to find the differences between the two sources. It is fine for us to get a report in a text file, which we will then use to do our move. A Windows solution is preferred. Any ideas? TIA
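
    Since the folder structures differ, comparing by content (hash) rather than by path is the safe way to find what exists only on the share. A small sketch of that idea in PowerShell, assuming version 4 or newer (for Get-FileHash) and placeholder paths:

        # hash every file in the archive once
        $archiveHashes = Get-ChildItem -Recurse -File 'D:\PhotoArchive' |
            Get-FileHash |
            Select-Object -ExpandProperty Hash

        # list every file on the share whose content is not in the archive
        Get-ChildItem -Recurse -File '\\nas\photos' |
            Get-FileHash |
            Where-Object { $archiveHashes -notcontains $_.Hash } |
            Select-Object -ExpandProperty Path |
            Set-Content 'C:\temp\files-to-move.txt'

    Hashing a large share over the network is slow, so this is an overnight job, but the resulting text file is exactly the kind of report described above.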

    Read the article

  • Difference between all servers in one cluster and more than one cluster with servers?

    - by silla
    I'm not sure I understand the difference, or how it works, between servers running in one cluster and servers spread over more than one cluster, with regard to high availability and load balancing. To me they are somehow the same; there is not really a big difference. Let's take a simple example: 2 servers in 1 cluster versus 2 clusters with 1 server each. 1. With two servers in one cluster, if one server fails, the other one is able to continue the work. The same goes for load balancing: these two servers are able to balance the work between them. 2. With two clusters of one server each: the same thing! If one server fails... The only thing that could be a problem with point 1 is if the cluster itself fails (then both servers are dead). But is that even possible? I have been reading about clustering and high availability, but I don't think I really get it; probably I did not really understand how a cluster works. Are these two setups with 1 cluster and 2 clusters basically the same, or are there really some big differences? What should I know about this? Thank you

    Read the article

  • Ubuntu server 10.04 disconnects after short periods of inactivity on my site

    - by user57019
    I'm new to Ubuntu (installed it for the first time just a couple of days ago on my server). I have Ubuntu Server 10.04 and am just using the terminal, no GUI like GNOME. So far it's working pretty great except for one big thing. Whenever I go to sleep and there's no activity on my server (it's not a big site, so active users drop to 0 during the night), the server kind of disconnects. The only thing that can bring the site back online is to restart the whole server. I've tried disabling power saving by using setterm, but that changes nothing. Even if I wake up the server by pressing a key, the site won't go back online! I've tried just restarting both Apache and MySQL (I'm using a LAMP server, by the way), but not even that works. But as soon as I turn the power off and on at the server, everything works normally for a period of inactivity (~5-15 minutes, I'd guess) and then it's down again unless someone logs in to the site and is active. I was previously using XAMPP on my laptop with Windows XP and that worked 24/7, so I don't think it's anything with my router or ISP. This is driving me crazy! My site is down the whole time I'm at school, as I have no way to restart the server if it goes offline. Does anyone have a clue what could be wrong?

    Read the article

  • Windows 7, network transmit (send) not working

    - by user326287
    My Win 7 machine worked for 2 years without problems, but now I can't transmit (send) large amounts of data over the LAN/Internet. I can: ping anything; browse the Internet and download files at full speed; send e-mails with very small attachments; measure a stable, full-speed download on Speedtest.net. I can't: test upload speed on Speedtest.net (the upload gets stuck); save/send e-mail messages with a big (128k) attachment, regardless of e-mail provider or mailbox. THIS IS NOT A HARDWARE/CABLE/CARD OR OTHER NETWORK DEVICE PROBLEM! When I boot from a Linux live CD, without ANY hardware change, all data sending and testing works correctly at full speed. In Win 7 I have already tried: disabling the Windows/3rd-party firewall completely; resetting the IP stack parameters (netsh int ip reset c:\resetlog.txt); System Restore; reinstalling the LAN driver. When I inspect the packets in Wireshark on Windows, I see lots of "TCP Retransmission" (maybe 60% of sent packets), and sometimes "TCP Dup ACK" or "TCP Out-of-Order". Linux doesn't do this. Thank you for the help.
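
    The symptom pattern (small sends fine, large sends drowning in retransmissions, Linux on the same hardware unaffected) often comes down to Windows TCP offload/autotuning features interacting badly with the NIC driver. A few diagnostic toggles worth trying from an elevated command prompt; these are things to rule out, not a guaranteed fix:

        rem turn off receive-window autotuning and TCP chimney offload
        netsh interface tcp set global autotuninglevel=disabled
        netsh interface tcp set global chimney=disabled

        rem receive-side scaling can be ruled out the same way
        netsh interface tcp set global rss=disabled

        rem show the resulting settings
        netsh interface tcp show global

    If one of these helps, the longer-term fix is usually to update the NIC driver or disable "Large Send Offload" in the adapter's Advanced properties rather than leaving the global flags off.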

    Read the article

  • Jabber client for Windows 7

    - by Anders
    I am looking for a Jabber client with some specific functions. I have spent a day and a half looking for one and it is getting tiresome. Clients that I have been using, and that have what I need, but that I am no longer interested in for one reason or another, are: Pidgin, which does not show complete messages in its popups; and Miranda IM, where I have a constant disconnect issue that does not seem to be resolvable in my case. What I need: Popups: a popup that shows broadcasts to users; a popup that shows when my username is typed in a conference chat. I need to be able to view the full message in the popup. No theme configuration should be needed to enable this, or there should be a working theme for it already. Preferred placement is the top right of the screen. It should be able to pop up when running full-screen applications, much like games. Conferences: easy access to bookmarked conferences; I do not want to go through submenus to rejoin a disconnected or closed conference. If I close the conference window, I want to stay connected to the conference until I exit the client. Tabbed interface. Configuration: a sober configuration; options are great, but there is a limit, and the above needs to be available in the options in an understandable manner. What I wish for: MSN: not needed, but if it is available then it is a big plus. Facebook: not needed, but if it is available then it is a big plus. Conferences/chats: not needed. Eye candy is always nice.

    Read the article

  • Deploying Memcached as 32bit or 64bit?

    - by rlotun
    I'm curious about how people deploy memcached on 64 bit machines. Do you compile a 64bit (standard) memcached binary and run that, or do people compile it in 32bit mode and run N instances (where N = machine_RAM / 4GB)? Consider a recommended deployment of Redis (from the Redis FAQ): Redis uses a lot more memory when compiled for 64 bit target, especially if the dataset is composed of many small keys and values. Such a database will, for instance, consume 50 MB of RAM when compiled for the 32 bit target, and 80 MB for 64 bit! That's a big difference. You can run 32 bit Redis binaries in a 64 bit Linux and Mac OS X system without problems. For OS X just use make 32bit. For Linux instead, make sure you have libc6-dev-i386 installed, then use make 32bit if you are using the latest Git version. Instead for Redis <= 1.2.2 you have to edit the Makefile and replace "-arch i386" with "-m32". If your application is already able to perform application-level sharding, it is very advisable to run N instances of Redis 32bit against a big 64 bit Redis box (with more than 4GB of RAM) instead than a single 64 bit instance, as this is much more memory efficient. Would not the same recommendation also apply to a memcached cluster?
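
    For the "N 32-bit instances" layout the mechanics are the same as in the Redis advice: several memcached processes on different ports, each capped below what a 32-bit address space can hold, with the client library spreading keys across them. A sketch for a 16GB box, assuming a 32-bit memcached binary is installed:

        # four instances of roughly 3 GB each, on consecutive ports
        for port in 11211 11212 11213 11214; do
            memcached -d -u memcache -m 3072 -p "$port"
        done

    The client side then lists all four host:port pairs; most memcached client libraries can hash keys across such a server list, so no application-level sharding code is needed.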

    Read the article

  • Get data from MySQL to Android application

    - by Mona
    I want to get data from MySQL database using PHP and display it in Android activity. I code it and pass JSON Array but there is a problem i dont know how to connect to server and my all database is on local server. I code it Kindly tell me where i go wrong so I can get exact results. I'll be very thankful to you. My PHP code is: <?php $response = array(); require_once __DIR__ . '/db_connect.php'; $db = new DB_CONNECT(); if (isset($_GET["cid"])) { $cid = $_GET['cid']; // get a product from products table $result = mysql_query("SELECT *FROM my_task WHERE cid = $cid"); if (!empty($result)) { // check for empty result if (mysql_num_rows($result) > 0) { $result = mysql_fetch_array($result); $task = array(); $task["cid"] = $result["cid"]; $task["cus_name"] = $result["cus_name"]; $task["contact_number"] = $result["contact_number"]; $task["ticket_no"] = $result["ticket_no"]; $task["task_detail"] = $result["task_detail"]; // success $response["success"] = 1; // user node $response["task"] = array(); array_push($response["my_task"], $task); // echoing JSON response echo json_encode($response); } else { // no task found $response["success"] = 0; $response["message"] = "No product found"; // echo no users JSON echo json_encode($response); } } else { // no task found $response["success"] = 0; $response["message"] = "No product found"; echo json_encode($response); } } else { $response["success"] = 0; $response["message"] = "Required field(s) is missing"; // echoing JSON response echo json_encode($response);} ?> My Android code is: public class My_Task extends Activity { TextView cus_name_txt, contact_no_txt, ticket_no_txt, task_detail_txt; EditText attend_by_txtbx, cus_name_txtbx, contact_no_txtbx, ticket_no_txtbx, task_detail_txtbx; Button btnSave; Button btnDelete; String cid; // Progress Dialog private ProgressDialog tDialog; // Creating JSON Parser object JSONParser jParser = new JSONParser(); ArrayList<HashMap<String, String>> my_taskList; // single task url private static final String url_read_mytask = "http://198.168.0.29/mobile/read_My_Task.php"; // url to update product private static final String url_update_mytask = "http://198.168.0.29/mobile/update_mytask.php"; // url to delete product private static final String url_delete_mytask = "http://198.168.0.29/mobile/delete_mytask.php"; // JSON Node names private static String TAG_SUCCESS = "success"; private static String TAG_MYTASK = "my_task"; private static String TAG_CID = "cid"; private static String TAG_NAME = "cus_name"; private static String TAG_CONTACT = "contact_number"; private static String TAG_TICKET = "ticket_no"; private static String TAG_TASKDETAIL = "task_detail"; private static String attend_by_txt; // task JSONArray JSONArray my_task = null; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.my_task); cus_name_txt = (TextView) findViewById(R.id.cus_name_txt); contact_no_txt = (TextView)findViewById(R.id.contact_no_txt); ticket_no_txt = (TextView)findViewById(R.id.ticket_no_txt); task_detail_txt = (TextView)findViewById(R.id.task_detail_txt); attend_by_txtbx = (EditText)findViewById(R.id.attend_by_txt); attend_by_txtbx.setText(My_Task.attend_by_txt); Spinner severity = (Spinner) findViewById(R.id.severity_spinner); // Create an ArrayAdapter using the string array and a default spinner layout ArrayAdapter<CharSequence> adapter3 = ArrayAdapter.createFromResource(this, R.array.Severity_array, android.R.layout.simple_dropdown_item_1line); // Specify the layout 
to use when the list of choices appears adapter3.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item); // Apply the adapter to the spinner severity.setAdapter(adapter3); // save button btnSave = (Button) findViewById(R.id.btnSave); btnDelete = (Button) findViewById(R.id.btnDelete); // getting product details from intent Intent i = getIntent(); // getting product id (pid) from intent cid = i.getStringExtra(TAG_CID); // Getting complete product details in background thread new GetProductDetails().execute(); // save button click event btnSave.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View arg0) { // starting background task to update product new SaveProductDetails().execute(); } }); // Delete button click event btnDelete.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View arg0) { // deleting product in background thread new DeleteProduct().execute(); } }); } /** * Background Async Task to Get complete product details * */ class GetProductDetails extends AsyncTask<String, String, String> { /** * Before starting background thread Show Progress Dialog * */ @Override protected void onPreExecute() { super.onPreExecute(); tDialog = new ProgressDialog(My_Task.this); tDialog.setMessage("Loading task details. Please wait..."); tDialog.setIndeterminate(false); tDialog.setCancelable(true); tDialog.show(); } /** * Getting product details in background thread * */ protected String doInBackground(String... params) { // updating UI from Background Thread runOnUiThread(new Runnable() { public void run() { // Check for success tag int success; try { // Building Parameters List<NameValuePair> params = new ArrayList<NameValuePair>(); params.add(new BasicNameValuePair("cid", cid)); // getting product details by making HTTP request // Note that product details url will use GET request JSONObject json = JSONParser.makeHttpRequest( url_read_mytask, "GET", params); // check your log for json response Log.d("Single Task Details", json.toString()); // json success tag success = json.getInt(TAG_SUCCESS); if (success == 1) { // successfully received product details JSONArray my_taskObj = json .getJSONArray(TAG_MYTASK); // JSON Array // get first product object from JSON Array JSONObject my_task = my_taskObj.getJSONObject(0); // task with this cid found // Edit Text // display task data in EditText cus_name_txtbx = (EditText) findViewById(R.id.cus_name_txt); cus_name_txtbx.setText(my_task.getString(TAG_NAME)); contact_no_txtbx = (EditText) findViewById(R.id.contact_no_txt); contact_no_txtbx.setText(my_task.getString(TAG_CONTACT)); ticket_no_txtbx = (EditText) findViewById(R.id.ticket_no_txt); ticket_no_txtbx.setText(my_task.getString(TAG_TICKET)); task_detail_txtbx = (EditText) findViewById(R.id.task_detail_txt); task_detail_txtbx.setText(my_task.getString(TAG_TASKDETAIL)); } else { // task with cid not found } } catch (JSONException e) { e.printStackTrace(); } } }); return null; } /** * After completing background task Dismiss the progress dialog * **/ protected void onPostExecute(String file_url) { // dismiss the dialog once got all details tDialog.dismiss(); } } /** * Background Async Task to Save product Details * */ class SaveProductDetails extends AsyncTask<String, String, String> { /** * Before starting background thread Show Progress Dialog * */ @Override protected void onPreExecute() { super.onPreExecute(); tDialog = new ProgressDialog(My_Task.this); tDialog.setMessage("Saving task ..."); tDialog.setIndeterminate(false); 
tDialog.setCancelable(true); tDialog.show(); } /** * Saving product * */ protected String doInBackground(String... args) { // getting updated data from EditTexts String cus_name = cus_name_txt.getText().toString(); String contact_no = contact_no_txt.getText().toString(); String ticket_no = ticket_no_txt.getText().toString(); String task_detail = task_detail_txt.getText().toString(); // Building Parameters List<NameValuePair> params = new ArrayList<NameValuePair>(); params.add(new BasicNameValuePair(TAG_CID, cid)); params.add(new BasicNameValuePair(TAG_NAME, cus_name)); params.add(new BasicNameValuePair(TAG_CONTACT, contact_no)); params.add(new BasicNameValuePair(TAG_TICKET, ticket_no)); params.add(new BasicNameValuePair(TAG_TASKDETAIL, task_detail)); // sending modified data through http request // Notice that update product url accepts POST method JSONObject json = JSONParser.makeHttpRequest(url_update_mytask, "POST", params); // check json success tag try { int success = json.getInt(TAG_SUCCESS); if (success == 1) { // successfully updated Intent i = getIntent(); // send result code 100 to notify about product update setResult(100, i); finish(); } else { // failed to update product } } catch (JSONException e) { e.printStackTrace(); } return null; } /** * After completing background task Dismiss the progress dialog * **/ protected void onPostExecute(String file_url) { // dismiss the dialog once product uupdated tDialog.dismiss(); } } /***************************************************************** * Background Async Task to Delete Product * */ class DeleteProduct extends AsyncTask<String, String, String> { /** * Before starting background thread Show Progress Dialog * */ @Override protected void onPreExecute() { super.onPreExecute(); tDialog = new ProgressDialog(My_Task.this); tDialog.setMessage("Deleting Product..."); tDialog.setIndeterminate(false); tDialog.setCancelable(true); tDialog.show(); } /** * Deleting product * */ protected String doInBackground(String... args) { // Check for success tag int success; try { // Building Parameters List<NameValuePair> params = new ArrayList<NameValuePair>(); params.add(new BasicNameValuePair("cid", cid)); // getting product details by making HTTP request JSONObject json = JSONParser.makeHttpRequest( url_delete_mytask, "POST", params); // check your log for json response Log.d("Delete Task", json.toString()); // json success tag success = json.getInt(TAG_SUCCESS); if (success == 1) { // product successfully deleted // notify previous activity by sending code 100 Intent i = getIntent(); // send result code 100 to notify about product deletion setResult(100, i); finish(); } } catch (JSONException e) { e.printStackTrace(); } return null; } /** * After completing background task Dismiss the progress dialog * **/ protected void onPostExecute(String file_url) { // dismiss the dialog once product deleted tDialog.dismiss(); } } public void onItemSelected(AdapterView<?> parent, View view, int pos, long id) { // An item was selected. 
You can retrieve the selected item using // parent.getItemAtPosition(pos) } public void onNothingSelected(AdapterView<?> parent) { // Another interface callback } } My JSONParser code is: public class JSONParser { static InputStream is = null; static JSONObject jObj = null; static String json = ""; // constructor public JSONParser() { } // function get json from url // by making HTTP POST or GET mehtod public static JSONObject makeHttpRequest(String url, String method, List<NameValuePair> params) { // Making HTTP request try { // check for request method if(method == "POST"){ // request method is POST // defaultHttpClient DefaultHttpClient httpClient = new DefaultHttpClient(); HttpPost httpPost = new HttpPost(url); httpPost.setEntity(new UrlEncodedFormEntity(params)); HttpResponse httpResponse = httpClient.execute(httpPost); HttpEntity httpEntity = httpResponse.getEntity(); is = httpEntity.getContent(); }else if(method == "GET"){ // request method is GET DefaultHttpClient httpClient = new DefaultHttpClient(); String paramString = URLEncodedUtils.format(params, "utf-8"); url += "?" + paramString; HttpGet httpGet = new HttpGet(url); HttpResponse httpResponse = httpClient.execute(httpGet); HttpEntity httpEntity = httpResponse.getEntity(); is = httpEntity.getContent(); } } catch (UnsupportedEncodingException e) { e.printStackTrace(); } catch (ClientProtocolException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } try { BufferedReader reader = new BufferedReader(new InputStreamReader( is, "iso-8859-1"), 8); StringBuilder sb = new StringBuilder(); String line = null; while ((line = reader.readLine()) != null) { sb.append(line + "\n"); } is.close(); json = sb.toString(); } catch (Exception e) { Log.e("Buffer Error", "Error converting result " + e.toString()); } // try parse the string to a JSON object try { jObj = new JSONObject(json); } catch (JSONException e) { Log.e("JSON Parser", "Error parsing data " + e.toString()); } // return JSON String return jObj; my all database is in localhost and it is not opening an activity. displays an error "Stopped unexpectedly":( How can i get exact results. Kindly guide me

    Read the article

  • Parallelism in .NET – Part 12, More on Task Decomposition

    - by Reed
    Many tasks can be decomposed using a Data Decomposition approach, but often, this is not appropriate.  Frequently, decomposing the problem into distinctive tasks that must be performed is a more natural abstraction. However, as I mentioned in Part 1, Task Decomposition tends to be a bit more difficult than data decomposition, and can require a bit more effort.  Before we being parallelizing our algorithm based on the tasks being performed, we need to decompose our problem, and take special care of certain considerations such as ordering and grouping of tasks. Up to this point in this series, I’ve focused on parallelization techniques which are most appropriate when a problem space can be decomposed by data.  Using PLINQ and the Parallel class, I’ve shown how problem spaces where there is a collection of data, and each element needs to be processed, can potentially be parallelized. However, there are many other routines where this is not appropriate.  Often, instead of working on a collection of data, there is a single piece of data which must be processed using an algorithm or series of algorithms.  Here, there is no collection of data, but there may still be opportunities for parallelism. As I mentioned before, in cases like this, the approach is to look at your overall routine, and decompose your problem space based on tasks.  The idea here is to look for discrete “tasks,” individual pieces of work which can be conceptually thought of as a single operation. Let’s revisit the example I used in Part 1, an application startup path.  Say we want our program, at startup, to do a bunch of individual actions, or “tasks”.  The following is our list of duties we must perform right at startup: Display a splash screen Request a license from our license manager Check for an update to the software from our web server If an update is available, download it Setup our menu structure based on our current license Open and display our main, welcome Window Hide the splash screen The first step in Task Decomposition is breaking up the problem space into discrete tasks. This, naturally, can be abstracted as seven discrete tasks.  In the serial version of our program, if we were to diagram this, the general process would appear as: These tasks, obviously, provide some opportunities for parallelism.  Before we can parallelize this routine, we need to analyze these tasks, and find any dependencies between tasks.  In this case, our dependencies include: The splash screen must be displayed first, and as quickly as possible. We can’t download an update before we see whether one exists. Our menu structure depends on our license, so we must check for the license before setting up the menus. Since our welcome screen will notify the user of an update, we can’t show it until we’ve downloaded the update. Since our welcome screen includes menus that are customized based off the licensing, we can’t display it until we’ve received a license. We can’t hide the splash until our welcome screen is displayed. By listing our dependencies, we start to see the natural ordering that must occur for the tasks to be processed correctly. The second step in Task Decomposition is determining the dependencies between tasks, and ordering tasks based on their dependencies. Looking at these tasks, and looking at all the dependencies, we quickly see that even a simple decomposition such as this one can get quite complicated.  
In order to simplify the problem of defining the dependencies, it’s often a useful practice to group our tasks into larger, discrete tasks.  The goal when grouping tasks is that you want to make each task “group” have as few dependencies as possible to other tasks or groups, and then work out the dependencies within that group.  Typically, this works best when any external dependency is based on the “last” task within the group when it’s ordered, although that is not a firm requirement.  This process is often called Grouping Tasks.  In our case, we can easily group together tasks, effectively turning this into four discrete task groups: 1. Show our splash screen – This needs to be left as its own task.  First, multiple things depend on this task, mainly because we want this to start before any other action, and start as quickly as possible. 2. Check for Update and Download the Update if it Exists - These two tasks logically group together.  We know we only download an update if the update exists, so that naturally follows.  This task has one dependency as an input, and other tasks only rely on the final task within this group. 3. Request a License, and then Setup the Menus – Here, we can group these two tasks together.  Although we mentioned that our welcome screen depends on the license returned, it also depends on setting up the menu, which is the final task here.  Setting up our menus cannot happen until after our license is requested.  By grouping these together, we further reduce our problem space. 4. Display welcome and hide splash - Finally, we can display our welcome window and hide our splash screen.  This task group depends on all three previous task groups – it cannot happen until all three of the previous groups have completed. By grouping the tasks together, we reduce our problem space, and can naturally see a pattern for how this process can be parallelized.  The diagram below shows one approach: The orange boxes show each task group, with each task represented within.  We can, now, effectively take these tasks, and run a large portion of this process in parallel, including the portions which may be the most time consuming.  We’ve now created two parallel paths which our process execution can follow, hopefully speeding up the application startup time dramatically. The main point to remember here is that, when decomposing your problem space by tasks, you need to: Define each discrete action as an individual Task Discover dependencies between your tasks Group tasks based on their dependencies Order the tasks and groups of tasks
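
    A brief, hypothetical sketch of what the final diagram looks like when expressed directly with the .NET 4 TPL; the method names are placeholders for the startup duties listed above, not code from the article:

        using System.Threading.Tasks;

        // ShowSplashScreen, CheckForUpdate, DownloadUpdate, RequestLicense, SetupMenus,
        // ShowWelcomeWindow and HideSplashScreen stand in for the steps described above.
        void RunStartupSequence()
        {
            // group 1: the splash goes up immediately
            var splash = Task.Factory.StartNew(() => ShowSplashScreen());

            // group 2: check for an update, downloading it only if one exists
            var update = Task.Factory.StartNew(() =>
            {
                if (CheckForUpdate())
                    DownloadUpdate();
            });

            // group 3: request the license, then build the menus from it
            var licensing = Task.Factory.StartNew(() =>
            {
                var license = RequestLicense();
                SetupMenus(license);
            });

            // group 4: runs only after all three groups above have completed
            Task.Factory.ContinueWhenAll(
                new[] { splash, update, licensing },
                _ => { ShowWelcomeWindow(); HideSplashScreen(); });
        }

    In a real UI application the last continuation would also need to be scheduled on the UI thread's TaskScheduler.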

    Read the article

  • How to Find Office 2003 Commands in Office 2010

    - by Matthew Guay
    Are you new to the ribbon interface in Office 2010?  Here’s how you can get up to speed and learn where everything is quickly and easily. Microsoft has made an interactive guide to Office 2010’s new interface to help users learn their way around the new version.  If you’ve already used Office 2007, then Office 2010 will be very easy to transition to, but if you’re still using Office 2003 you may find the learning curve more steep.  With this interactive guide, upgrading your Office skills doesn’t have to be hard. Learn Your Way Around the Office Ribbon Open the Office 2010 interactive guides site (link below) in your browser, and select the Office app you want to explore. The guides are powered by Silverlight, so if you don’t already have it installed you will be prompted to do so. Once the guide has loaded, click Start to begin. Select any menu or toolbar item in the Office 2003 mockup.  A tooltip will appear to show you how to find this option in Word 2010. If you click the item, the interface will switch to an Office 2010 mockup and will interactively show you how to access this feature.  The Thumbnails view isn’t available by default in Word 2010, so it shows us how to add it to the ribbon.  When you’ve figured this command out, click anywhere to go back to the Office 2003 mockup and find another item. Currently the guides are available for Word, Excel, and PowerPoint, but the site says that guides for the other Office apps will be available soon.  Here’s the PowerPoint guide showing where the Rehearse Timings option is in PowerPoint 2010. Install the Interactive Guides to Your Computer You can also install the guides to your computer so you can easily access them even if you’re not online.  Open the guide you want to install, and click the Install button in the top right corner of the guide. Choose where you want the shortcuts, and click Ok. Here’s the Interactive Word 2010 guide installed on our computer.  The downloaded version seemed to work faster in our tests, likely because all the content was already saved to the computer.  If you decide you don’t need it any more, click Uninstall in the top right corner. Download Office Cheat Sheets If you’d like a cheat-sheet of Office commands that have changed or are new in Office 2010, Microsoft’s got that for you, too.  You can download Office reference workbooks (link below) that show how to access each item that was in Office 2003’s menus.  Here’s the Word guide showing where each of Word 2003’s commands from the help menu are in Word 2010. Learn Your Way Around Office 2007, Too! Microsoft offers similar interactive guides for learning the ribbon in Office 2007, so if you’re still using Office 2007 but can’t find a command, feel free to check it out as well (link below).  Guides are available for Word, Excel, PowerPoint, Access, and Outlook 2007.  You can also download cheat sheets for Office 2007 at this site as well.  Here’s the tutorial showing us where the font options are in PowerPoint 2007. Conclusion We have found the ribbon interface to be a great addition to Office, but if you’ve got years of Office 2003 experience under your belt you may find it difficult to locate your favorite commands.  These tutorials can help you use your old Office knowledge to learn Office 2010 or 2007 in a quick and easy way! 
    Links: Office 2010 interactive guide, Download Office 2010 reference workbooks, Office 2007 interactive guide.

    Read the article
