Search Results


  • window.focus() not working in Google Chrome

    - by nickyt
    Hey folks. Just wondering if Google Chrome is going to support window.focus() at some point -- and by "support" I mean have it actually work. The call doesn't fail; it just doesn't do anything. None of the other major browsers have this problem: Firefox, IE6-IE8 and Safari all behave as expected.

    I have a client-side class for managing browser windows. When I first create a window, the window comes into focus, but subsequent attempts to bring focus to it do nothing. From what I can tell, this appears to be a security feature to avoid annoying pop-ups, and it does not appear to be a WebKit issue, since it works in Safari. I know one idea someone brought forward was to close the window and then reopen it, but that is a horrible solution. Googling shows that I do not appear to be the only person frustrated by this. And just to be 100% clear: I mean new windows, not tabs (tabs cannot be focused, from what I've read), and all the windows being opened are in the same domain. Any ideas?

        MyCompany = { UI: {} }; // Put this here if you want to test the code. I create these namespaces elsewhere in code.

        MyCompany.UI.Window = new function() {
            // Private fields
            var that = this;
            var windowHandles = {};

            // Public members
            this.windowExists = function(windowTarget) {
                return windowTarget && windowHandles[windowTarget] && !windowHandles[windowTarget].closed;
            };

            this.open = function(url, windowTarget, windowProperties) {
                // See if we have a window handle and whether it is closed or not.
                if (that.windowExists(windowTarget)) {
                    // We still have our window object, so check whether the URL is the same
                    // as the one we're trying to load.
                    var currentLocation = windowHandles[windowTarget].location;
                    if ((/^http(?:s?):/.test(url) && currentLocation.href !== url) ||
                        // This check is required because the URL might be the same but relative,
                        // e.g. /Default.aspx instead of http://localhost/Default.aspx
                        (!/^http(?:s?):/.test(url) &&
                         (currentLocation.pathname + currentLocation.search + currentLocation.hash) !== url)) {
                        // Not the same URL, so load the new one.
                        windowHandles[windowTarget].location = url;
                    }
                    // Give focus to the window. This works in IE 6/7/8, Firefox and Safari, but
                    // not Chrome. Well, in Chrome it works the first time, but subsequent focus
                    // attempts fail. I believe this is a security feature in Chrome to avoid
                    // annoying popups.
                    windowHandles[windowTarget].focus();
                } else {
                    // Needed so that tabbed browsers (pretty much all browsers except IE6)
                    // actually open a new window as opposed to a tab. By specifying at least
                    // one window property, we're guaranteed a new window instead of a tab.
                    windowProperties = windowProperties || 'menubar=yes,location=yes,width=700,height=400,scrollbars=yes,resizable=yes';
                    windowTarget = windowTarget || "_blank";

                    // Create a new window.
                    var windowHandle = windowProperties ?
                        window.open(url, windowTarget, windowProperties) :
                        window.open(url, windowTarget);

                    if (null === windowHandle) {
                        alert("You have a popup blocker enabled. Please allow popups for " +
                              location.protocol + "//" + location.host);
                    } else if ("_blank" !== windowTarget) {
                        // Store the window handle for reuse if a target was specified.
                        windowHandles[windowTarget] = windowHandle;
                        windowHandles[windowTarget].focus();
                    }
                }
            };
        };
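    Two workarounds people commonly report (a sketch under assumptions -- Chrome's focus policy changes between versions, so neither is guaranteed to work):

        // Inside the windowExists branch, instead of a bare focus() call:
        var win = windowHandles[windowTarget];

        // 1) A blur() immediately before focus() sometimes convinces Chrome
        //    to honour the focus request.
        win.blur();
        win.focus();

        // 2) Re-acquire the handle by target name: window.open() with an empty URL
        //    and an existing window name returns that window without reloading it,
        //    and Chrome treats the open() call itself as user-initiated, which may
        //    bring the window forward where a plain focus() is ignored.
        window.open('', windowTarget).focus();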

    Read the article

  • MySQL InnoDB Corruption after power outage, possible to recover?

    - by Tim Hackett
    Hey guys, I recently started trying to get Redmine up and running after a power outage that seems to have corrupted our InnoDB database in MySQL. Redmine held an extensive set of documentation that I would like to get back even if Redmine itself can't run. The MySQL service fails on startup. I have tried inserting innodb_force_recovery = 4 per the documentation at the URL in the error log (I also tried values 1 through 6, having backed up all directories after the corruption). I have verified through "mysqld-nt --print-defaults" that the server is starting with the recovery option in its parameters. The machine is running Windows Server 2003 SP2 on a Xeon E5335 with 2GB RAM; MySQL is not mirrored to another machine, nor is the machine a mirror. I do not have any backups because the previous person did not set them up. Here is the error log:

        InnoDB: The log sequence number in ibdata files does not match
        InnoDB: the log sequence number in the ib_logfiles!
        100308 14:50:01  InnoDB: Database was not shut down normally!
        InnoDB: Starting crash recovery.
        InnoDB: Reading tablespace information from the .ibd files...
        InnoDB: Restoring possible half-written data pages from the doublewrite
        InnoDB: buffer...
        100308 14:50:02  InnoDB: Error: page 7 log sequence number 0 935521175
        InnoDB: is in the future! Current system log sequence number 0 933419020.
        InnoDB: Your database may be corrupt or you may have copied the InnoDB
        InnoDB: tablespace but not the InnoDB log files. See
        InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
        InnoDB: for more information.
        100308 14:50:02  InnoDB: Error: page 2 log sequence number 0 935517607
        InnoDB: is in the future! Current system log sequence number 0 933419020.
        InnoDB: Your database may be corrupt or you may have copied the InnoDB
        InnoDB: tablespace but not the InnoDB log files. See
        InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
        InnoDB: for more information.
        100308 14:50:02  InnoDB: Error: page 11 log sequence number 0 935517607
        InnoDB: is in the future! Current system log sequence number 0 933419020.
        InnoDB: Your database may be corrupt or you may have copied the InnoDB
        InnoDB: tablespace but not the InnoDB log files. See
        InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
        InnoDB: for more information.
        100308 14:50:02  InnoDB: Error: page 5 log sequence number 0 972973045
        InnoDB: is in the future! Current system log sequence number 0 933419020.
        InnoDB: Your database may be corrupt or you may have copied the InnoDB
        InnoDB: tablespace but not the InnoDB log files. See
        InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
        InnoDB: for more information.
        100308 14:50:02  InnoDB: Error: page 6 log sequence number 0 972984051
        InnoDB: is in the future! Current system log sequence number 0 933419020.
        InnoDB: Your database may be corrupt or you may have copied the InnoDB
        InnoDB: tablespace but not the InnoDB log files. See
        InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
        InnoDB: for more information.
        100308 14:50:02  InnoDB: Error: page 1577 log sequence number 0 972737368
        InnoDB: is in the future! Current system log sequence number 0 933419020.
        InnoDB: Your database may be corrupt or you may have copied the InnoDB
        InnoDB: tablespace but not the InnoDB log files. See
        InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
        InnoDB: for more information.
        InnoDB: Error: trying to access page number 4294965119 in space 0,
        InnoDB: space name .\ibdata1,
        InnoDB: which is outside the tablespace bounds.
        InnoDB: Byte offset 0, len 16384, i/o type 10.
        InnoDB: If you get this error at mysqld startup, please check that
        InnoDB: your my.cnf matches the ibdata files that you have in the
        InnoDB: MySQL server.
        100308 14:50:02  InnoDB: Assertion failure in thread 960 in file .\fil\fil0fil.c line 3959
        InnoDB: We intentionally generate a memory trap.
        InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
        InnoDB: If you get repeated assertion failures or crashes, even
        InnoDB: immediately after the mysqld startup, there may be
        InnoDB: corruption in the InnoDB tablespace. Please refer to
        InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
        InnoDB: about forcing recovery.
        100308 14:50:02 [ERROR] mysqld-nt: Got signal 11. Aborting!
        100308 14:50:02 [ERROR] Aborting
        100308 14:50:02 [Note] mysqld-nt: Shutdown complete
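    A typical path out of this state (a sketch, assuming the goal with forced recovery is a dump, not a repaired server; levels above 3 increasingly risk permanent data loss, so raise the value only as far as needed to get mysqld to stay up):

        # my.ini -- start with forced recovery (InnoDB becomes effectively read-only)
        [mysqld]
        innodb_force_recovery = 4

        # With the server up in this crippled mode, dump everything out:
        mysqldump -u root -p --all-databases > all-databases.sql

        # Then stop the server, move the old ibdata*/ib_logfile* files aside,
        # remove innodb_force_recovery from my.ini, restart, and re-import:
        mysql -u root -p < all-databases.sql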

    Read the article

  • building median for each string in IList<IDictionary<string, double>>

    - by Oliver
    Actually I have a little brain bender and just can't find the right direction to make it work. Given is an IList<IDictionary<string, double>>, filled as follows:

        Name | Value
        -----+------
         "x" |  3.8
         "y" |  4.2
         "z" |  1.5
        -----+------
         "x" |  7.2
         "y" |  2.9
         "z" |  1.3
        -----+------
         ... |  ...

    To fill this up with some random data I used the following call:

        var list = CreateRandomPoints(new string[] { "x", "y", "z" }, 20);

    which works as follows:

        private IList<IDictionary<string, double>> CreateRandomPoints(string[] variableNames, int count)
        {
            var list = new List<IDictionary<string, double>>(count);
            list.AddRange(CreateRandomPoints(variableNames).Take(count));
            return list;
        }

        private IEnumerable<IDictionary<string, double>> CreateRandomPoints(string[] variableNames)
        {
            while (true)
                yield return CreateRandomLine(variableNames);
        }

        private IDictionary<string, double> CreateRandomLine(string[] variableNames)
        {
            var dict = new Dictionary<string, double>(variableNames.Length);
            foreach (var variable in variableNames)
            {
                dict.Add(variable, _Rand.NextDouble() * 10);
            }
            return dict;
        }

    It is already ensured that every dictionary within the list contains the same keys (but from list to list the names and count of the keys can change). So that's what I've got. Now to what I need: I'd like to get the median (or any other mathematical aggregate) of each key across all the dictionaries, so the function to call would look something like:

        IDictionary<string, double> GetMedianOfRows(this IList<IDictionary<string, double>> list)

    Best of all would be to pass the aggregate operation as a parameter to the function to make it more generic (I don't know if this Func has the correct parameters, but it should illustrate what I'd like to do):

        private IDictionary<string, double> Aggregate(this IList<IDictionary<string, double>> list, Func<IEnumerable<double>, double> aggregator)

    My actual biggest problem is doing the job with a single iteration over the list: if the list holds 20 variables with 1000 values each, I don't want to iterate over it 20 times. Instead I would go over the list once and compute all twenty variables at the same time (the biggest advantage of doing it that way would be that this could also work as IEnumerable<T> on any part of the list in a later step). So here is the code I already have:

        public static IDictionary<string, double> GetMedianOfRows(this IList<IDictionary<string, double>> list)
        {
            // Checking of parameters is left out!
            // Get the first item for initialization of the result dictionary.
            var firstItem = list[0];

            // Create the result dictionary and fill in the variable names.
            var dict = new Dictionary<string, double>(firstItem.Count);

            // Iterate over the whole list.
            foreach (IDictionary<string, double> row in list)
            {
                // Iterate over each key/value pair within the list.
                foreach (var kvp in row)
                {
                    // How to determine the median of all values?
                }
            }
            return dict;
        }

    Just to be sure, here is a little explanation about the median.
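    A single-pass sketch of the generic version (assumptions: the aggregator needs to see all of a key's values, so one List<double> per key is collected during the single iteration over the outer list; the names AggregateRows and Median are invented here, partly to avoid colliding with LINQ's Enumerable.Aggregate):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public static class RowAggregation
        {
            public static IDictionary<string, double> AggregateRows(
                this IList<IDictionary<string, double>> list,
                Func<IEnumerable<double>, double> aggregator)
            {
                // One value buffer per key, sized up front from the first row.
                var values = new Dictionary<string, List<double>>(list[0].Count);
                foreach (var key in list[0].Keys)
                    values[key] = new List<double>(list.Count);

                // The single pass over the outer list fills all buffers at once.
                foreach (var row in list)
                    foreach (var kvp in row)
                        values[kvp.Key].Add(kvp.Value);

                // Apply the supplied aggregate once per key.
                var result = new Dictionary<string, double>(values.Count);
                foreach (var pair in values)
                    result[pair.Key] = aggregator(pair.Value);
                return result;
            }

            public static double Median(IEnumerable<double> source)
            {
                var sorted = source.OrderBy(x => x).ToList();
                int mid = sorted.Count / 2;
                return sorted.Count % 2 == 1
                    ? sorted[mid]
                    : (sorted[mid - 1] + sorted[mid]) / 2.0;
            }
        }

    Called as var medians = list.AggregateRows(RowAggregation.Median); swapping in Enumerable.Average or Enumerable.Max gives the other aggregates for free.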

    Read the article

  • Hyper-V File Server Clustering - at my wit’s end

    - by René Kåbis
    I am at my wit's end with file server clustering under Hyper-V. I am hoping that someone might be able to help me figure out this Gordian knot of a technology that seems to have dead ends (like forcing cluster VMs to use iSCSI drives where normally-attached VHDX drives could suffice) where logic and reason would normally provide a logical solution.

    My hardware: I will be running three servers (in the end), but right now everything is taking place on one server. One of the secondary servers will exist purely as a witness/quorum, and another slightly more powerful one will act as an emergency backup (with additional storage, just not redundant) to hold the secondary AD VM and the other halves of a set of clustered VMs: the SQL VM and the file server VM. Please note, these are each the secondary nodes of their clusters; the main nodes will be on the most powerful first machine.

    My heavy lifter is a machine that also contains all of the truly redundant storage on the network. If this gives anyone the heebie-jeebies, too bad. It has a 6TB (usable) RAID-10 array and will (in the end) hold the primary nodes of both aforementioned clusters, but right now it is holding all the VMs: DC01, DC02, SQL01, SQL02, FS01 and FS02. Eventually I will be adding VMs to handle Exchange, SharePoint and Lync, but only to this main server (the secondary server won't be able to handle more than three or four VMs, so why burden it? The AD, SQL and FS VMs are the most critical for the business). If anyone is now saying, "wait, what about a SAN or a NAS for the file servers?" -- well, too bad. What exists on the main machine is what I have to deal with.

    I followed these instructions, but I seem to be unable to get things to work. In order to make the file server truly redundant, I cannot trust any one machine to hold the only data store on the network. Therefore, I have created a set of iSCSI drives on the VM host of the main machine and attached one to each file server VM. The end result is that I want FS01 to sit on the heavy lifter, along with its iSCSI "drive", and FS02 to sit on the secondary machine with its own iSCSI "drive" there as well. That is, neither iSCSI drive will end up sitting on the same machine as the other. The clustered FS will then fully duplicate the contents of the iSCSI drives between each other, so that if one physical machine (or its FS VM) goes toes-up, the other has a full copy of the data on its own iSCSI drive.

    My problem occurs when I try to apply the file server role within Failover Cluster Manager. Actually, it is even before that -- it occurs when adding the disks. Since I have added each disk preferentially to a specific VM (by limiting the initiator by DNS hostname, and by adding two-way CHAP authentication), this forces each VM to be in control of its own iSCSI disk. However, when I try to add the disks to the Disks section of Storage within Failover Cluster Manager, the entire process fails for a random disk of the pair. That is, one will come online, but the other will remain offline because it does not have the correct "owner node". I mean, really -- WTF? Of course it doesn't have the right owner node; both drives are showing the same node name! I cannot seem to have one drive show up with one node name as owner and the other drive show up with the other node name as owner. And because both drives are not "online", I cannot create a pool to apply to a cluster role. Talk about getting stuck between a rock and a hard place!
    I've got more to add, but my workplace is closing for the day and I have to wrap things up. I will try to add more tomorrow morning when I get in. My main objective is to have a file server VM on each machine, the storage on each machine, but transparent failover in case one physical machine fails. Essentially, a failover FS that doesn't care which machine fails -- the storage contents are replicated equally on each machine. Am I even heading in the right direction?
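    For the owner-node symptom specifically, it can help to inspect and move disk ownership from PowerShell rather than the GUI (a sketch using the standard FailoverClusters cmdlets; the node name HV02 is a placeholder):

        Import-Module FailoverClusters

        # Which node currently owns each clustered disk, and what state is it in?
        Get-ClusterResource | Where-Object { $_.ResourceType -eq "Physical Disk" } |
            Format-Table Name, OwnerNode, State

        # Unassigned cluster disks live in the "Available Storage" group; try moving
        # that group to the node that can actually reach the iSCSI target.
        Move-ClusterGroup "Available Storage" -Node HV02

    Note, though, that failover clustering expects clustered disks to be reachable from every node; if each iSCSI target only accepts one VM's initiator (via the hostname/CHAP restrictions), there is no node the cluster can bring the second disk online on, which would explain exactly this failure.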

    Read the article

  • MySQL is hogging my server resources

    - by Reacen
    Does anyone have any idea what can cause this weird behaviour, and how I go about fixing it? This is all coming from MySQL only (both RAM and CPU usage), for about 10 minutes after I reboot my Java game server (which has a pool of 256 connections). There are not that many queries, and I think it may be more of a MySQL misconfiguration problem.

    My server: 3.20 GHz * 6 cores / 24 GB RAM / 64-bit Windows Server 2003.

    My game server: a Java server with a 256-connection MySQL pool (MyISAM engine), about 500,000 accounts and 9 million rows of game items in the database, with about 3,000 players connected. About 15 minutes after the game server reboot, the server resumes its stability, CPU usage drops down to 1% ~ 5% and memory to 6 GB.

    Here is a copy of my MySQL configuration. Any advice about it will be appreciated -- I really set it up almost at random:

        # Example MySQL config file for very large systems.
        #
        # This is for a large system with memory of 1G-2G where the system runs mainly
        # MySQL.
        #
        # You can copy this file to
        # /etc/my.cnf to set global options,
        # mysql-data-dir/my.cnf to set server-specific options (in this
        # installation this directory is C:\mysql\data) or
        # ~/.my.cnf to set user-specific options.
        #
        # In this file, you can use all long options that a program supports.
        # If you want to know which options a program supports, run the program
        # with the "--help" option.

        # The following options will be passed to all MySQL clients
        [client]
        #password       = your_password
        port            = 3306
        socket          = /tmp/mysql.sock

        # Here follows entries for some specific programs

        # The MySQL server
        [mysqld]
        #log=c:\mysql.log
        port            = 3306
        socket          = /tmp/mysql.sock
        skip-locking
        key_buffer_size = 2572M
        max_allowed_packet = 64M
        table_open_cache = 512
        sort_buffer_size = 128M
        read_buffer_size = 128M
        read_rnd_buffer_size = 128M
        myisam_sort_buffer_size = 500M
        thread_cache_size = 32
        query_cache_size = 1948M
        # Try number of CPU's*2 for thread_concurrency
        thread_concurrency = 12
        max_connections = 5000

        # Don't listen on a TCP/IP port at all. This can be a security enhancement,
        # if all processes that need to connect to mysqld run on the same host.
        # All interaction with mysqld must be made via Unix sockets or named pipes.
        # Note that using this option without enabling named pipes on Windows
        # (via the "enable-named-pipe" option) will render mysqld useless!
        #
        #skip-networking

        # Replication Master Server (default)
        # binary logging is required for replication
        log-bin=mysql-bin

        # required unique id between 1 and 2^32 - 1
        # defaults to 1 if master-host is not set
        # but will not function as a master if omitted
        server-id       = 1

        # Replication Slave (comment out master section to use this)
        #
        # To configure this host as a replication slave, you can choose between
        # two methods :
        #
        # 1) Use the CHANGE MASTER TO command (fully described in our manual) -
        #    the syntax is:
        #
        #    CHANGE MASTER TO MASTER_HOST=<host>, MASTER_PORT=<port>,
        #    MASTER_USER=<user>, MASTER_PASSWORD=<password> ;
        #
        #    where you replace <host>, <user>, <password> by quoted strings and
        #    <port> by the master's port number (3306 by default).
        #
        #    Example:
        #
        #    CHANGE MASTER TO MASTER_HOST='125.564.12.1', MASTER_PORT=3306,
        #    MASTER_USER='joe', MASTER_PASSWORD='secret';
        #
        # OR
        #
        # 2) Set the variables below. However, in case you choose this method, then
        #    when you start replication for the first time (even unsuccessfully, for
        #    example if you mistyped the password in master-password and the slave
        #    fails to connect), the slave will create a master.info file, and any
        #    later change in this file to the variables' values below will be ignored
        #    and overridden by the content of the master.info file, unless you shut
        #    down the slave server, delete master.info and restart the slave server.
        #    For that reason, you may want to leave the lines below untouched
        #    (commented) and instead use CHANGE MASTER TO (see above)
        #
        # required unique id between 2 and 2^32 - 1
        # (and different from the master)
        # defaults to 2 if master-host is set
        # but will not function as a slave if omitted
        #server-id       = 2
        #
        # The replication master for this slave - required
        #master-host     = <hostname>
        #
        # The username the slave will use for authentication when connecting
        # to the master - required
        #master-user     = <username>
        #
        # The password the slave will authenticate with when connecting to
        # the master - required
        #master-password = <password>
        #
        # The port the master is listening on.
        # optional - defaults to 3306
        #master-port     = <port>
        #
        # binary logging - not required for slaves, but recommended
        #log-bin=mysql-bin
        #
        # binary logging format - mixed recommended
        #binlog_format=mixed

        # Point the following paths to different dedicated disks
        #tmpdir         = /tmp/
        #log-update     = /path-to-dedicated-directory/hostname

        # Uncomment the following if you are using InnoDB tables
        #innodb_data_home_dir = C:\mysql\data/
        #innodb_data_file_path = ibdata1:2000M;ibdata2:10M:autoextend
        #innodb_log_group_home_dir = C:\mysql\data/
        # You can set .._buffer_pool_size up to 50 - 80 %
        # of RAM but beware of setting memory usage too high
        #innodb_buffer_pool_size = 384M
        #innodb_additional_mem_pool_size = 20M
        # Set .._log_file_size to 25 % of buffer pool size
        #innodb_log_file_size = 100M
        #innodb_log_buffer_size = 8M
        #innodb_flush_log_at_trx_commit = 1
        #innodb_lock_wait_timeout = 50

        [mysqldump]
        quick
        max_allowed_packet = 64M

        [mysql]
        no-auto-rehash
        # Remove the next comment character if you are not familiar with SQL
        #safe-updates

        [myisamchk]
        key_buffer_size = 256M
        sort_buffer_size = 256M
        read_buffer = 8M
        write_buffer = 8M

        [mysqlhotcopy]
        interactive-timeout
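    The numbers above point at a likely culprit (an educated guess from the config, not a measured diagnosis): sort_buffer_size, read_buffer_size and read_rnd_buffer_size are allocated per connection, so 128M each against a 256-connection pool (and max_connections = 5000) lets MySQL try to claim far more memory than the box has while every connection warms up, and a 1948M query cache is also a well-known source of lock contention. A more conventional starting point for a 24 GB MyISAM server might look like:

        [mysqld]
        key_buffer_size      = 4G     # MyISAM index cache -- the one big global buffer
        query_cache_size     = 64M    # very large query caches serialize on their own lock
        sort_buffer_size     = 2M     # per connection!
        read_buffer_size     = 1M     # per connection!
        read_rnd_buffer_size = 1M     # per connection!
        thread_cache_size    = 64
        max_connections      = 512    # the game pool only needs 256
        table_open_cache     = 2048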

    Read the article

  • LDAP query on linux against AD returns groups with no members

    - by SethG
    I am using LDAP + Kerberos to authenticate against Active Directory on Windows 2003 R2. My krb5.conf and ldap.conf appear to be correct (according to pretty much every sample I found on the net). I can log in to the host with both password and SSH keys. When I run getent passwd, all my LDAP user accounts are listed with all the important attributes. When I run getent group, all the LDAP groups and their GIDs are listed, but no group members. If I run ldapsearch and filter on any group, the members are all listed in the "member" attribute. So the data is there for the taking; it's just not being parsed properly. It would appear that I am simply using an incorrect mapping in ldap.conf, but I can't see it. I've tried several variations and all give the same result. Here is my current ldap.conf:

        host <ad-host1-ip> <ad-host2-ip>
        base dc=my,dc=full,dc=dn
        uri ldap://<ad-host1> ldap://<ad-host2>
        ldap_version 3
        binddn <mybinddn>
        bindpw <mybindpw>
        scope sub
        bind_policy hard
        nss_reconnect_tries 3
        nss_reconnect_sleeptime 1
        nss_reconnect_maxsleeptime 8
        nss_reconnect_maxconntries 3
        nss_map_objectclass posixAccount User
        nss_map_objectclass posixGroup Group
        nss_map_attribute uid sAMAccountName
        nss_map_attribute gidNumber msSFU30GidNumber
        nss_map_attribute uidNumber msSFU30UidNumber
        nss_map_attribute cn cn
        nss_map_attribute gecos displayName
        nss_map_attribute homeDirectory msSFU30HomeDirectory
        nss_map_attribute loginShell msSFU30LoginShell
        nss_map_attribute uniqueMember member
        pam_filter objectcategory=User
        pam_login_attribute sAMAccountName
        pam_member_attribute member
        pam_password ad

    Here's the kicker: this config works 100% fine on a different Linux box with a different distro. It does not work on the distro I am planning to switch to. I have installed from source the versions of pam_ldap and nss_ldap on the new box that match the old box, which fixed another problem I was having with this setup. Other relevant info: the original AD box was Windows 2003. Its mirror died a horrible hardware death, so I am trying to add two more 2003 R2 servers to the mirror tree and ultimately drop the old 2003 box. The new R2 boxes appear to have joined the DC forest properly. What do I need to do to get groups working? I've exhausted all the resources I could find and need a different angle. Any input is appreciated.

    Status update, 7/31/09:
    I have managed to tweak my config file to get full info from the AD, and performance is nice and snappy. I replaced the back-rev'd copies of pam_ldap and nss_ldap with the current ones for the distro I'm using, so it's back to a standard out-of-the-box install.
    Here's my current config:

        host <ad-host1-ip> <ad-host2-ip>
        base dc=my,dc=full,dc=dn
        uri ldap://<ad-host1> ldap://<ad-host2>
        ldap_version 3
        binddn <mybinddn>
        bindpw <mybindpw>
        scope sub
        bind_policy soft
        nss_reconnect_tries 3
        nss_reconnect_sleeptime 1
        nss_reconnect_maxsleeptime 8
        nss_reconnect_maxconntries 3
        nss_connect_policy oneshot
        referrals no
        nss_map_objectclass posixAccount User
        nss_map_objectclass posixGroup Group
        nss_map_attribute uid sAMAccountName
        nss_map_attribute gidNumber msSFU30GidNumber
        nss_map_attribute uidNumber msSFU30UidNumber
        nss_map_attribute cn cn
        nss_map_attribute gecos displayName
        nss_map_attribute homeDirectory msSFU30HomeDirectory
        nss_map_attribute loginShell msSFU30LoginShell
        nss_map_attribute uniqueMember member
        pam_filter objectcategory=CN=Person,CN=Schema,CN=Configuration,DC=w2k,DC=cis,DC=ksu,DC=edu
        pam_login_attribute sAMAccountName
        pam_member_attribute member
        pam_password ad
        ssl off
        tls_checkpeer no
        sasl_secprops maxssf=0

    The remaining problem is that when you run the groups command, not all subscribed groups are listed. Some are (one or two), but not all. Group memberships are still honored, such as file and printer access: getent group foo still shows that the user is a member of group foo. So it appears to be a presentation bug and does not interfere with normal operation. It also appears that some group searches (I have not determined exactly how many) do not resolve correctly, even though the group is listed. For example, when you run "getent group bar", nothing is returned, but if you run "getent group | grep bar" or "getent group | grep <bar_gid>", you can see that it is indeed listed and that the group name and GID are correct. This still seems like an LDAP search or mapping error, but I can't figure out what it is. I'm a heck of a lot closer than earlier in the week, but I'd really like to get this last detail ironed out.
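    One way to narrow down whether this is a search problem or a mapping problem is to replay the lookups nss_ldap should be making (a sketch; the bind DN, base and group name are the placeholders from the configs above):

        # Does the group entry itself resolve, with the mapped attributes present?
        ldapsearch -x -D "<mybinddn>" -w "<mybindpw>" -b "dc=my,dc=full,dc=dn" \
            "(&(objectClass=Group)(cn=bar))" cn msSFU30GidNumber member

        # Compare with what NSS actually returns for the same group:
        getent group bar
        getent group | grep bar

    If the ldapsearch shows the entry but "getent group bar" returns nothing, the by-name NSS lookup (filter or scope) is at fault rather than the member mapping.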

    Read the article

  • Launching Intent.ACTION_VIEW intent not working on saved image file

    - by Savvas Dalkitsis
    First of all, let me say that this question is slightly connected to another question of mine; actually, it was created because of it. I have the following code to write a bitmap, downloaded from the net, to a file on the SD card:

        // Get image from url
        URL u = new URL(url);
        HttpGet httpRequest = new HttpGet(u.toURI());
        HttpClient httpclient = new DefaultHttpClient();
        HttpResponse response = (HttpResponse) httpclient.execute(httpRequest);
        HttpEntity entity = response.getEntity();
        BufferedHttpEntity bufHttpEntity = new BufferedHttpEntity(entity);
        InputStream instream = bufHttpEntity.getContent();
        Bitmap bmImg = BitmapFactory.decodeStream(instream);
        instream.close();

        // Write image to a file on the sd card
        File posterFile = new File(Environment.getExternalStorageDirectory().getAbsolutePath()
                + "/Android/data/com.myapp/files/image.jpg");
        posterFile.createNewFile();
        BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream(posterFile));
        Bitmap mutable = Bitmap.createScaledBitmap(bmImg, bmImg.getWidth(), bmImg.getHeight(), true);
        mutable.compress(Bitmap.CompressFormat.JPEG, 100, out);
        out.flush();
        out.close();

        // Launch the default viewer for the file
        Intent intent = new Intent();
        intent.setAction(android.content.Intent.ACTION_VIEW);
        intent.setDataAndType(Uri.parse(posterFile.getAbsolutePath()), "image/*");
        ((Activity) getContext()).startActivity(intent);

    A few notes. I am creating the "mutable" bitmap after seeing someone else use it, and it seems to work better than without it. And I am using the parse method on the Uri class rather than fromFile because in my code I call these in different places, and when I create the intent I have a string path instead of a file.

    Now for my problem. The file gets created. The intent launches a dialog asking me to select a viewer. I have 3 viewers installed: the Astro image viewer, the default media gallery (I have a Milestone on 2.1, but the Milestone's 2.1 update did not include the 3D gallery, so it's the old one) and the 3D gallery from the Nexus One (I found the APK in the wild). When I launch the 3 viewers, the following happens:

    Astro image viewer: the activity launches, but I see nothing but a black screen.
    Media Gallery: I get an exception dialog "The application Media Gallery (process com.motorola.gallery) has stopped unexpectedly. Please try again" with a force close option.
    3D gallery: everything works as it should.

    When I instead open the file from Astro file manager (browse to it and simply click), I get the same chooser dialog, but this time things are different:

    Astro image viewer: everything works as it should.
    Media Gallery: everything works as it should.
    3D gallery: the activity launches, but I see nothing but a black screen.

    As you can see, everything is a complete mess. I have no idea why this happens, but it happens like this every single time -- it's not a random bug. Am I missing something when I create the intent? Or when I create the image file? Any ideas?
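    A likely suspect (an assumption, but a common one for exactly this symptom): Uri.parse() on a bare path produces a Uri with no file:// scheme, and each viewer reacts to the scheme-less Uri differently. Building the Uri from the file, or keeping the scheme on the string, gives every viewer the same well-formed input (posterPath below is a hypothetical string variable for the places where only a path is available):

        // From the File object:
        intent.setDataAndType(Uri.fromFile(posterFile), "image/jpeg");

        // Or, where only the string path is at hand:
        intent.setDataAndType(Uri.parse("file://" + posterPath), "image/jpeg");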

    Read the article

  • All downloads being interrupted

    - by Jake
    System: Windows 7 Professional 64-bit, 8GB RAM, Intel i5-2400 CPU, +300GB free on the hard drive, AVG Internet Security 2012 (tried enabled and disabled, with the firewall enabled and disabled -- no effect either way). This computer is less than a year old.

    Network: this problem is occurring on a single computer on a network with multiple computers. The router is a Motorola Netopia 3347-02 (DSL modem/wireless router combined). The computer is plugged in directly to the modem; other computers are using the wireless successfully. The router has been reset. The only thing odd about the connection between the router and the computer is that it is configured to allow RDP through, so it is assigned a static IP by the router and port forwarding is enabled for port 3389. Also, though I doubt it matters, a second wireless router is active behind this router, providing a second network that some computers in the area use without issues.

    Details: all downloads initiated on this specific computer eventually fail. This includes streaming from YouTube, specialized downloads (iTunes), downloads from websites, FTP downloads, etc. Failure occurs with all browsers, but in Chrome the process goes like this:

    1) The download begins normally.
    2) At some point between (observed) 7MB and 229MB, the download stops progressing (at this point, if watching Chrome's task manager, you can see the network activity for the downloading tab drop to 0kps).
    3) For some time the download sits there still attempting to complete, but it will eventually display "123,049,871/0 B, Interrupted" (where the number is whatever it actually got to).

    The file I am using to test this is a very large .zip file located on a server I control, but the problem seems to occur on any site. The amount downloaded is completely random and seems to be more time-based than anything (if I start a download immediately after the last one fails, it tends to get further than the last one). Small files can get through for this reason, though they can fail as well. In a test where I simultaneously downloaded the same file via HTTP (Chrome) and FTP (Windows Explorer), both downloads failed at the same instant, though Explorer displayed "Connection timed out" several minutes before Chrome finally showed the download as interrupted.

    Other things I have tried, based on advice given to people with similar/identical problems:

    - Setting my MTU to 1492 (as described here: http://blog.thecompwiz.com/2011/08/networking-issues.html)
    - Disabling write caching to the hard drive
    - Storing the download on an external device
    - Successfully transmitting a +1GB file from one computer on the same network to this computer
    - Disabling indexing in the folder the download was being stored in
    - Disabling all security software
    - Checking that all drivers are up to date
    - Reading about 50 accounts with nearly exact descriptions of what I'm experiencing, none of which had a solution given

    Running processes:

        Image Name                     PID  Session Name  Session#  Mem Usage
        =========================  =======  ============  ========  =========
        System Idle Process              0  Services             0       24 K
        System                           4  Services             0  104,836 K
        smss.exe                       332  Services             0    1,276 K
        csrss.exe                      764  Services             0    5,060 K
        wininit.exe                    820  Services             0    4,748 K
        csrss.exe                      844  Console              1   23,764 K
        services.exe                   876  Services             0   11,856 K
        lsass.exe                      892  Services             0   14,420 K
        lsm.exe                        900  Services             0    7,820 K
        winlogon.exe                   944  Console              1    7,716 K
        svchost.exe                    428  Services             0   12,744 K
        svchost.exe                    796  Services             0   12,240 K
        svchost.exe                   1036  Services             0   22,372 K
        svchost.exe                   1084  Services             0  174,132 K
        svchost.exe                   1112  Services             0   56,144 K
        svchost.exe                   1288  Services             0   18,640 K
        svchost.exe                   1404  Services             0   29,616 K
        spoolsv.exe                   1576  Services             0   25,924 K
        svchost.exe                   1616  Services             0   12,788 K
        AppleMobileDeviceService.     1728  Services             0    9,796 K
        avgwdsvc.exe                  1820  Services             0    8,268 K
        mDNSResponder.exe             1844  Services             0    5,832 K
        w3dbsmgr.exe                  1108  Services             0   43,760 K
        QBCFMonitorService.exe        1336  Services             0   16,408 K
        svchost.exe                   2404  Services             0   28,240 K
        taskhost.exe                  3020  Console              1   12,372 K
        dwm.exe                       2280  Console              1    5,968 K
        explorer.exe                  2964  Console              1  152,476 K
        WUDFHost.exe                  3316  Services             0    6,740 K
        svchost.exe                   3408  Services             0    5,556 K
        RAVCpl64.exe                  3684  Console              1   13,864 K
        igfxtray.exe                  3700  Console              1    7,804 K
        hkcmd.exe                     3772  Console              1    7,868 K
        igfxpers.exe                  3788  Console              1   10,940 K
        sidebar.exe                   3836  Console              1   84,400 K
        chrome.exe                    3964  Console              1   19,640 K
        pptd40nt.exe                  4068  Console              1    5,156 K
        acrotray.exe                  3908  Console              1   14,676 K
        avgtray.exe                   3872  Console              1    9,508 K
        jusched.exe                   4076  Console              1    4,412 K
        iTunesHelper.exe              1532  Console              1   87,308 K
        SearchIndexer.exe             3492  Services             0   36,948 K
        iPodService.exe               4136  Services             0    7,944 K
        BrccMCtl.exe                  4276  Console              1   18,132 K
        splwow64.exe                  4380  Console              1   32,600 K
        qbupdate.exe                  4836  Console              1   24,236 K
        svchost.exe                   4288  Services             0   20,700 K
        wmpnetwk.exe                  3112  Services             0    9,516 K
        FNPLicensingService.exe       5248  Services             0    5,852 K
        QBW32.EXE                     5508  Console              1  127,068 K
        QBDBMgrN.exe                  5600  Services             0   42,252 K
        EXCEL.EXE                     2512  Console              1   99,100 K
        LMS.exe                       3188  Services             0    5,616 K
        UNS.exe                       1600  Services             0    7,308 K
        axlbridge.exe                 5260  Console              1    5,132 K
        chrome.exe                    5888  Console              1  200,336 K
        chrome.exe                    3536  Console              1   26,076 K
        chrome.exe                    1952  Console              1   20,168 K
        chrome.exe                    4596  Console              1   24,696 K
        chrome.exe                    4292  Console              1   48,096 K
        chrome.exe                    2796  Console              1   23,520 K
        Acrobat.exe                   1240  Console              1   87,252 K
        123w.exe                      4892  Console              1   22,728 K
        calc.exe                      1700  Console              1   12,636 K
        chrome.exe                    1328  Console              1   28,888 K
        chrome.exe                    3696  Console              1   47,012 K
        rundll32.exe                  6320  Console              1    7,104 K
        chrome.exe                    4928  Console              1   44,248 K
        AVGIDSAgent.exe                260  Services             0   12,940 K
        avgfws.exe                    6052  Services             0   26,912 K
        avgnsa.exe                    5064  Services             0    2,496 K
        avgrsa.exe                    3088  Services             0    2,200 K
        avgcsrva.exe                  2596  Services             0      380 K
        avgcsrva.exe                  6948  Services             0      408 K
        StikyNot.exe                   452  Console              1   14,772 K
        chrome.exe                    4580  Console              1   28,200 K
        chrome.exe                    4016  Console              1   57,756 K
        svchost.exe                   7140  Services             0    4,500 K
        chrome.exe                    6264  Console              1   56,824 K
        chrome.exe                    7008  Console              1   56,896 K
        chrome.exe                    2224  Console              1   38,032 K
        taskhost.exe                   612  Console              1    7,228 K
        chrome.exe                    6000  Console              1   10,928 K
        chrome.exe                    2568  Console              1   43,052 K
        chrome.exe                     272  Console              1   75,988 K
        chrome.exe                    7328  Console              1   53,240 K
        PaprPort.exe                  7976  Console              1  137,152 K
        pplinks.exe                   7500  Console              1   14,052 K
        ppscanmg.exe                  5744  Console              1   18,996 K
        taskeng.exe                   7388  Console              1    6,308 K
        SearchProtocolHost.exe        8024  Services             0    8,804 K
        SearchFilterHost.exe          7232  Services             0    7,848 K
        chrome.exe                    8016  Console              1   37,440 K
        cmd.exe                       7692  Console              1    3,096 K
        conhost.exe                   7516  Console              1    5,872 K
        tasklist.exe                  8160  Console              1    5,772 K
        WmiPrvSE.exe                  7684  Services             0    6,400 K

    Any help with this would be greatly appreciated; I've been beating my head against a wall over this all day. This computer serves dual purpose as the main company document server and the owner's work computer, so it's fairly important that it be fully functional, and I cannot figure this out.
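    Since the MTU change from that blog post is a frequent fix for time-based download stalls behind DSL, it is worth verifying that the setting actually took (a sketch; the interface name is whatever netsh lists for your wired adapter):

        :: Find the largest payload that passes unfragmented
        :: (1464 data bytes + 28 bytes of headers = MTU 1492):
        ping -f -l 1464 8.8.8.8

        :: Show current MTUs, then set the wired interface persistently:
        netsh interface ipv4 show subinterfaces
        netsh interface ipv4 set subinterface "Local Area Connection" mtu=1492 store=persistent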

    Read the article

  • Cyrus on CentOS with sasl / pam / ldap

    - by Oscar
    SASL/PAM/LDAP is driving me crazy... that's what I read a lot when googling for problems in this area, and what I experience myself :-S I'm trying to get Cyrus IMAP working for virtual hosting on CentOS with this authorisation backend and really don't know what's happening. In saslauthd I configured the LDAP search filter to use, but it looks like PAM completely ignores it. Here's what I do for testing (I have done more tests, all with similar results):

        [root@testserv ~]# imtest -u [email protected] -a [email protected]
        WARNING: no hostname supplied, assuming localhost
        S: * OK [CAPABILITY IMAP4 IMAP4rev1 LITERAL+ ID STARTTLS] testserv. Cyrus IMAP4 v2.3.7-Invoca-RPM-2.3.7-7.el5_6.4 server ready
        C: C01 CAPABILITY
        S: * CAPABILITY IMAP4 IMAP4rev1 LITERAL+ ID STARTTLS ACL RIGHTS=kxte QUOTA MAILBOX-REFERRALS NAMESPACE UIDPLUS NO_ATOMIC_RENAME UNSELECT CHILDREN MULTIAPPEND BINARY SORT SORT=MODSEQ THREAD=ORDEREDSUBJECT THREAD=REFERENCES ANNOTATEMORE CATENATE CONDSTORE IDLE LISTEXT LIST-SUBSCRIBED X-NETSCAPE URLAUTH
        S: C01 OK Completed
        Please enter your password:
        C: L01 LOGIN [email protected] {6}
        S: + go ahead
        C: <omitted>
        S: L01 NO Login failed: authentication failure
        Authentication failed. generic failure
        Security strength factor: 0
        C: Q01 LOGOUT
        * BYE LOGOUT received
        Q01 OK Completed
        Connection closed.

    The LDAP entry does exist (and so does the mailbox in Cyrus):

        [root@testserv ~]# ldapsearch -WxD cn=Manager,o=mydomain,c=com [email protected]
        Enter LDAP Password:
        # extended LDIF
        #
        # LDAPv3
        # base <> with scope subtree
        # filter: [email protected]
        # requesting: ALL
        #

        # myuser, accounts, testserv.mydomain.com, mydomain, com
        dn: uid=myuser,ou=accounts,dc=testserv.mydomain.com,o=mydomain,c=com
        objectClass: top
        objectClass: person
        objectClass: organizationalPerson
        objectClass: inetOrgPerson
        objectClass: posixAccount
        objectClass: shadowAccount
        uidNumber: 16
        uid: myuser
        gidNumber: 5
        givenName: My
        sn: Name
        mail: [email protected]
        cn: My Name
        userPassword:: dYN5ebB0fXhNRn1pZllhRnJX7Uk=
        shadowLastChange: 15176
        homeDirectory: /dev/null

        # search result
        search: 2
        result: 0 Success

        # numResponses: 2
        # numEntries: 1

    This is what I get in /var/log/messages:

        Aug 2 04:00:11 testserv cyrus/imap[12514]: auxpropfunc error invalid parameter supplied
        Aug 2 04:00:19 testserv saslauthd[5926]: do_auth : auth failure: [[email protected]] [service=imap] [realm=testserv.mydomain.com] [mech=pam] [reason=PAM auth error]

    ... in /var/adm/auth.log:

        Aug 2 04:00:11 testserv cyrus/imap[12514]: auxpropfunc error invalid parameter supplied
        Aug 2 04:00:11 testserv cyrus/imap[12514]: _sasl_plugin_load failed on sasl_auxprop_plug_init for plugin: ldapdb
        Aug 2 04:00:19 testserv saslauthd[5926]: DEBUG: auth_pam: pam_authenticate failed: User not known to the underlying authentication module
        Aug 2 04:00:19 testserv saslauthd[5926]: do_auth : auth failure: [[email protected]] [service=imap] [realm=testserv.mydomain.com] [mech=pam] [reason=PAM auth error]

    (AFAIK I can ignore the auxprop message.) ... and in /var/log/slapd.log:

        Aug 2 04:00:19 testserv slapd[5968]: conn=61 fd=27 ACCEPT from IP=127.0.0.1:51403 (IP=0.0.0.0:389)
        Aug 2 04:00:19 testserv slapd[5968]: conn=61 op=0 BIND dn="" method=128
        Aug 2 04:00:19 testserv slapd[5968]: conn=61 op=0 RESULT tag=97 err=0 text=
        Aug 2 04:00:19 testserv slapd[5968]: conn=61 op=1 SRCH base="o=mydomain,c=com" scope=2 deref=0 filter="([email protected])"
        Aug 2 04:00:19 testserv slapd[5968]: conn=61 op=1 SEARCH RESULT tag=101 err=0 nentries=0 text=
        Aug 2 04:00:19 testserv slapd[5968]: conn=61 op=2 UNBIND
        Aug 2 04:00:19 testserv slapd[5968]: conn=61 fd=27 closed

    These are the settings in /etc/imapd.conf:

        sasl_mech_list: PLAIN LOGIN
        sasl_pwcheck_method: saslauthd
        ## sasl_auxprop_plugin: sasldb
        sasl_auto_transition: no

    and my sasl config:

        [root@testserv ~]# cat /etc/sysconfig/saslauthd
        # Directory in which to place saslauthd's listening socket, pid file, and so
        # on. This directory must already exist.
        SOCKETDIR=/var/run/saslauthd

        # Mechanism to use when checking passwords. Run "saslauthd -v" to get a list
        # of which mechanism your installation was compiled with the ablity to use.
        MECH=pam

        # Additional flags to pass to saslauthd on the command line. See saslauthd(8)
        # for the list of accepted flags.
        FLAGS="-c -r -O /etc/saslauthd.conf"

        [root@testserv ~]# cat /etc/saslauthd.conf
        ldap_servers: ldap://127.0.0.1/
        ldap_search_base: dc=%d,o=mydomain,c=com
        ldap_auth_method: bind
        #ldap_filter: (|(uid=%u)((&(mail=%u@%d)(accountStatus=active)))
        ldap_filter: (&(mail=%u@%d)(accountStatus=active))
        ldap_debug: 1
        ldap_version: 3

    The accountStatus=active attribute is not in LDAP yet, but that doesn't make a difference, since I don't see it in the filter anyway... it's not the reason for the failure. The weird thing is, I do get an error when I rename or remove /etc/saslauthd.conf, but when the file exists it seems happily ignored... The filter in slapd.log seems to be taken from /etc/ldap.conf. Apart from some timers, that only contains:

        host 127.0.0.1
        base o=mydomain,c=com
        pam_login_attribute mail

    Commenting out the pam_login_attribute results in this filter in slapd.log:

        filter="([email protected])"

    pam-imap looks like this:

        [root@testserv ~]# cat /etc/pam.d/imap
        auth required pam_ldap.so debug
        account required pam_ldap.so debug
        #auth sufficient pam_unix.so likeauth nullok
        #auth sufficient pam_ldap.so use_first_pass
        #auth required pam_deny.so
        #account sufficient pam_unix.so
        #account sufficient pam_ldap.so

    The commented-out lines are because I don't have the Cyrus admin user in LDAP; that's a Linux user. That works fine when uncommented, but I still need to play around with it a little, and first I want to get IMAP working. Finally, nsswitch:

        [root@testserv ~]# cat /etc/nsswitch.conf
        #
        # /etc/nsswitch.conf
        #
        # An example Name Service Switch config file. This file should be
        # sorted with the most-used services at the beginning.
        #
        # The entry '[NOTFOUND=return]' means that the search for an
        # entry should stop if the search in the previous entry turned
        # up nothing. Note that if the search failed due to some other reason
        # (like no NIS server responding) then the search continues with the
        # next entry.
        #
        # Legal entries are:
        #
        #   nisplus or nis+     Use NIS+ (NIS version 3)
        #   nis or yp           Use NIS (NIS version 2), also called YP
        #   dns                 Use DNS (Domain Name Service)
        #   files               Use the local files
        #   db                  Use the local database (.db) files
        #   compat              Use NIS on compat mode
        #   hesiod              Use Hesiod for user lookups
        #   [NOTFOUND=return]   Stop searching if not found so far
        #
        # To use db, put the "db" in front of "files" for entries you want to be
        # looked up first in the databases
        #
        # Example:
        #passwd:    db files nisplus nis
        #shadow:    db files nisplus nis
        #group:     db files nisplus nis

        passwd:     compat ldap
        group:      compat ldap
        shadow:     compat ldap
        hosts:      files dns
        bootparams: nisplus [NOTFOUND=return] files
        ethers:     files
        netmasks:   files
        networks:   files
        protocols:  files
        rpc:        files
        services:   files
        netgroup:   nisplus
        publickey:  nisplus
        automount:  files nisplus
        aliases:    files nisplus

    Any info on where to start looking will be greatly appreciated! Thanks in advance.
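    Since the failure is "PAM auth error", it may help to test each link of the chain in isolation (a sketch; testsaslauthd ships with cyrus-sasl and bypasses Cyrus entirely):

        # 1) Exercise saslauthd -> PAM -> LDAP directly, no Cyrus involved:
        testsaslauthd -u [email protected] -p 'secret' -s imap

        # 2) If that fails too, run saslauthd in the foreground with debugging
        #    and watch which PAM calls it actually makes:
        saslauthd -d -a pam -O /etc/saslauthd.conf

    One note on the "happily ignored" saslauthd.conf: with MECH=pam, the ldap_* options in /etc/saslauthd.conf are not consulted at all (they apply only to MECH=ldap); PAM's pam_ldap reads /etc/ldap.conf instead, which would explain why the filter seen in slapd.log comes from there.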

    Read the article

  • Running into some issues with Tumblr's theme parser

    - by Kylee
    Below is part of a Tumblr theme I'm working on. You will notice that I use the {block:PostType} syntax to declare the opening tag of each post, which is an <li> element in an ordered list. This not only lets me dynamically set the li's class based on the type of post, but also cuts down on the number of times I'm calling the ShareThis JS, which was really bogging down the page. It creates a new issue, though, which I believe is a flaw in Tumblr's parser: each post ends up as an ordered list with one <li> element in it. I know I could solve this by making each post a <div>, but I really like the control and semantics of using a list. Tumblr gurus -- suggestions? Sample of the code:

        {block:Posts}
        <ol class="posts">

            {block:Text}
            <li class="post type_text" id="{PostID}">
                {block:Title}
                <h2><a href="{Permalink}" title="Go to post '{Title}'.">{Title}</a></h2>
                {/block:Title}
                {Body}
            {/block:Text}

            {block:Photo}
            <li class="post type_photo" id="{PostID}">
                <div class="image">
                    <a href="{LinkURL}"><img src="{PhotoURL-500}" alt="{PhotoAlt}"></a>
                </div>
                {block:Caption}
                {Caption}
                {/block:Caption}
            {/block:Photo}

            {block:Photoset}
            <li class="post type_photoset" id="{PostID}">
                {Photoset-500}
                {block:Caption}
                {Caption}
                {/block:Caption}
            {/block:Photoset}

            {block:Quote}
            <li class="post type_quote" id="{PostID}">
                <blockquote>
                    <div class="quote_symbol">&ldquo;</div>
                    {Quote}
                </blockquote>
                {block:Source}
                <div class="quote_source">{Source}</div>
                {/block:Source}
            {/block:Quote}

            {block:Link}
            <li class="post type_link" id="{PostID}">
                <h2><a href="{URL}" {Target} title="Go to {Name}.">{Name}</a></h2>
                {block:Description}
                {Description}
                {/block:Description}
            {/block:Link}

            {block:Chat}
            <li class="post type_chat" id="{PostID}">
                {block:Title}
                <h2><a href="{Permalink}" title="Go to post {PostID} '{Title}'.">{Title}</a></h2>
                {/block:Title}
                <table class="chat_log">
                    {block:Lines}
                    <tr class="{Alt} user_{UserNumber}">
                        <td class="person">{block:Label}{Label}{/block:Label}</td>
                        <td class="message">{Line}</td>
                    </tr>
                    {/block:Lines}
                </table>
            {/block:Chat}

            {block:Video}
            <li class="post type_video" id="{PostID}">
                {Video-500}
                {block:Caption}
                {Caption}
                {/block:Caption}
            {/block:Video}

            {block:Audio}
            <li class="post type_audio" id="{PostID}">
                {AudioPlayerWhite}
                {block:Caption}
                {Caption}
                {/block:Caption}
                {block:ExternalAudio}
                <p><a href="{ExternalAudioURL}" title="Download '{ExternalAudioURL}'">Download</a></p>
                {/block:ExternalAudio}
            {/block:Audio}

            <div class="post_footer">
                <p class="post_date">Posted on {ShortMonth} {DayOfMonth}, {Year} at {12hour}:{Minutes} {AmPm}</p>
                <ul>
                    <li><a class="comment_link" href="{Permalink}#disqus_thread">Comments</a></li>
                    <li><script type="text/javascript" src="http://w.sharethis.com/button/sharethis.js#publisher=722e181d-1d8a-4363-9ebe-82d5263aea94&amp;type=website"></script></li>
                </ul>
                {block:PermalinkPage}
                <div id="disqus_thread"></div>
                <script type="text/javascript" src="http://disqus.com/forums/kyleetilley/embed.js"></script>
                <noscript><a href="http://disqus.com/forums/kyleetilley/?url=ref">View the discussion thread.</a></noscript>
                {/block:PermalinkPage}
            </div>
        </li>
        </ol>
        {/block:Posts}
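    One thing worth checking before blaming the parser: {block:Posts} is rendered once per post, so everything inside it -- including the <ol> -- repeats for every post. Moving the list element outside the block should leave a single ordered list with one <li> per post (a sketch of the reshuffled structure; the per-type blocks keep their existing contents):

        <ol class="posts">
            {block:Posts}
                {block:Text}
                <li class="post type_text" id="{PostID}">
                    ...post content and the shared footer...
                </li>
                {/block:Text}
                <!-- the other {block:PostType} sections follow the same pattern -->
            {/block:Posts}
        </ol>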

    Read the article

  • Postfix configuration - Using Virtualmin, but the server is bouncing back my mail

    - by brodiebrodie
    I have no experience in setting up Postfix and thought Virtualmin might do the legwork for me. Apparently not. When I try to send mail to the domain (to [email protected], [email protected] or [email protected]), I get the following message returned:

        This is the mail system at host dedq239.localdomain.

        I'm sorry to have to inform you that your message could not be delivered to
        one or more recipients. It's attached below.

        For further assistance, please send mail to <postmaster>

        If you do so, please include this problem report. You can delete your own
        text from the attached returned message.

                           The mail system

        <[email protected]> (expanded from <[email protected]>): User unknown in
            virtual alias table

        Final-Recipient: rfc822; [email protected]
        Original-Recipient: rfc822;[email protected]
        Action: failed
        Status: 5.0.0
        Diagnostic-Code: X-Postfix; User unknown in virtual alias table

    How can I diagnose the problem here? It seems that the mail gets to my server, but the server fails to deliver the message locally to the correct user. (This is a guess; truthfully, I have no idea what is happening.) I have checked my virtual alias table and it seems to be set up correctly (I can post it if that would be helpful). Can anyone give me a clue as to the next step? Thanks. Here is my main.cf:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        broken_sasl_auth_clients = yes
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        debug_peer_level = 2
        html_directory = no
        local_recipient_maps = $virtual_mailbox_maps
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
        myorigin = $mydomain
        newaliases_path = /usr/bin/newaliases.postfix
        readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES
        sample_directory = /usr/share/doc/postfix-2.3.3/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        smtpd_recipient_restrictions = permit_mynetworks reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        soft_bounce = no
        unknown_local_recipient_reject_code = 550
        virtual_alias_maps = hash:/etc/postfix/virtual

    My mail log file (the last entries):

        Sep 30 15:13:47 dedq239 postfix/cleanup[7237]: 207C6B18158: message-id=<[email protected]>
        Sep 30 15:13:47 dedq239 postfix/qmgr[7177]: 207C6B18158: from=<[email protected]>, size=1805, nrcpt=1 (queue active)
        Sep 30 15:13:47 dedq239 postfix/error[7238]: 207C6B18158: to=<[email protected]>, orig_to=<[email protected]>, relay=none, delay=0.64, delays=0.61/0.01/0/0.02, dsn=5.0.0, status=bounced (User unknown in virtual alias table)
        Sep 30 15:13:47 dedq239 postfix/cleanup[7237]: 8DC13B18169: message-id=<[email protected]>
        Sep 30 15:13:47 dedq239 postfix/qmgr[7177]: 8DC13B18169: from=<>, size=3691, nrcpt=1 (queue active)
        Sep 30 15:13:47 dedq239 postfix/bounce[7239]: 207C6B18158: sender non-delivery notification: 8DC13B18169
        Sep 30 15:13:47 dedq239 postfix/qmgr[7177]: 207C6B18158: removed
        Sep 30 15:13:48 dedq239 postfix/smtp[7240]: 8DC13B18169: to=<[email protected]>, relay=gmail-smtp-in.l.google.com[209.85.216.55]:25, delay=1.3, delays=0.02/0.01/0.58/0.75, dsn=2.0.0, status=sent (250 2.0.0 OK 1254348828 36si15082901pxi.91)
        Sep 30 15:13:48 dedq239 postfix/qmgr[7177]: 8DC13B18169: removed
        Sep 30 15:14:17 dedq239 postfix/smtpd[7233]: disconnect from mail-bw0-f228.google.com[209.85.218.228]

    The /etc/aliases file is below. I have not touched this file ("myvirtualdomain" is a replacement for my real domain name):

        # Aliases in this file will NOT be expanded in the header from
        # Mail, but WILL be visible over networks or from /bin/mail.
        #
        # >>>>>>>>>>  The program "newaliases" must be run after
        # >> NOTE >>  this file is updated for any changes to
        # >>>>>>>>>>  show through to sendmail.
        #
        # Basic system aliases -- these MUST be present.
        mailer-daemon:  postmaster
        postmaster:     root

        # General redirections for pseudo accounts.
        bin:            root
        daemon:         root
        adm:            root
        lp:             root
        sync:           root
        shutdown:       root
        halt:           root
        mail:           root
        news:           root
        uucp:           root
        operator:       root
        games:          root
        gopher:         root
        ftp:            root
        nobody:         root
        radiusd:        root
        nut:            root
        dbus:           root
        vcsa:           root
        canna:          root
        wnn:            root
        rpm:            root
        nscd:           root
        pcap:           root
        apache:         root
        webalizer:      root
        dovecot:        root
        fax:            root
        quagga:         root
        radvd:          root
        pvm:            root
        amanda:         root
        privoxy:        root
        ident:          root
        named:          root
        xfs:            root
        gdm:            root
        mailnull:       root
        postgres:       root
        sshd:           root
        smmsp:          root
        postfix:        root
        netdump:        root
        ldap:           root
        squid:          root
        ntp:            root
        mysql:          root
        desktop:        root
        rpcuser:        root
        rpc:            root
        nfsnobody:      root
        ingres:         root
        system:         root
        toor:           root
        manager:        root
        dumper:         root
        abuse:          root

        newsadm:        news
        newsadmin:      news
        usenet:         news
        ftpadm:         ftp
        ftpadmin:       ftp
        ftp-adm:        ftp
        ftp-admin:      ftp
        www:            webmaster
        webmaster:      root
        noc:            root
        security:       root
        hostmaster:     root
        info:           postmaster
        marketing:      postmaster
        sales:          postmaster
        support:        postmaster

        # trap decode to catch security attacks
        decode:         root

        # Person who should get root's mail
        #root:          marc

        abuse-myvirtualdomain.com: [email protected]

    My /etc/postfix/virtual file is below ("myvirtualdomain" is again a replacement). I think this file was generated by Virtualmin, and I have tried messing around with it with no success... This is the version without my changes:

        myunixusername@myvirtualdomain .com myunixusername
        myvirtualdomain .com myvirtualdomain.com
        [email protected] [email protected]
        [email protected] [email protected]
        [email protected] [email protected]
        [email protected] [email protected]
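    Two things worth verifying (a sketch, with placeholder names, since the real addresses above are scrubbed): every left-hand address that should be deliverable needs a right-hand side that resolves to a real mailbox, and the hash map must be rebuilt after every edit or Postfix keeps reading the stale virtual.db file. Also, if the stray space in "myvirtualdomain .com" is really in the file and not a paste artifact, that alone would break the domain lookup.

        # /etc/postfix/virtual -- each line is "incoming-address  destination"
        myvirtualdomain.com            anything          # declares the virtual alias domain
        [email protected]      myunixusername    # deliver to the local unix user
        [email protected]     myunixusername

        # Rebuild the .db file, reload, then test the exact lookup Postfix performs:
        postmap /etc/postfix/virtual
        postfix reload
        postmap -q [email protected] hash:/etc/postfix/virtual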

    Read the article

  • Is this question too hard for a seasoned C++ architect?

    - by Monomer
    Background information: we're looking to hire a seasoned C++ architect (10+ years of development, of which at least 6 years must be C++) for a high-frequency trading platform. The job advert says STL and Boost proficiency is a must, with a preference for modern uses of C++. The company I work for is a Fortune 500 investment bank (aka the finance industry); it requires passes in all the standard SHL tests (numeric, vocabulary, spatial, etc.) before interviews can commence.

    Everyone on the team was given the task of coming up with one question to ask the candidates during a written/typed test. Please note this is the second test given to the candidates, the first being an advanced IKM C++ test, done in our offices, supervised and without internet access. People who pass that take the second test. After roughly 70 candidates, my question has been determined to be statistically the worst performing: the fewest people attempted it, and even fewer were able to give meaningful answers. Please note the second test is not timed; the candidate can literally take as long as they like (we've had one person take roughly 10.5 hours).

    My question to SO is this: after the SHL and advanced IKM C++ tests, backed up by at least 6+ years of C++ development experience, is it still OK to be unable to even comment on, let alone come up with some loose strategy for, solving the following question?

    The question: there is a class C with methods foo, boo, boo_and_foo and foo_and_boo. Each method takes i, j, k and l clock cycles respectively, where i < j, k < i+j and l < i+j.

        class C
        {
        public:
            int foo() {...}
            int boo() {...}
            int boo_and_foo() {...}
            int foo_and_boo() {...}
        };

    In code one might write:

        C c;
        ...
        int i = c.foo() + c.boo();

    But it would be better to have:

        int i = c.foo_and_boo();

    What changes or techniques could one make to the definition of C that would allow syntax similar to the original usage, but have the compiler generate the latter? Note that foo and boo are not commutative.

    Possible solution: we were basically looking for an expression-templates-based approach, and were willing to give marks to anyone who had even hinted at or used the phrase or related terminology. We got only two people who used the wording, and they weren't able to properly describe how to accomplish the task in detail. We use such techniques all over the place, due to the use of various mathematical operators for matrix- and vector-based calculations -- for example, to decide at compile time between IPP and hand-woven implementations for a particular architecture, among many other things. This particular area of software development requires microsecond response times. I believe I could/should be able to teach a junior such techniques, but given the assumed caliber of the candidates, I expected a little more. Is this really a difficult question? Should it be removed? Or are we just not seeing the right candidates?
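    For reference, a minimal sketch of one shape such an answer could take (a proxy-object variant of the expression-template idea; the _impl names and dummy return values are invented for illustration, and the original question leaves the method bodies open):

        #include <iostream>

        class C {
            int foo_impl()         { return 1; }  // costs i cycles
            int boo_impl()         { return 2; }  // costs j cycles
            int foo_and_boo_impl() { return 3; }  // costs l < i + j cycles
            int boo_and_foo_impl() { return 4; }  // costs k < i + j cycles

        public:
            // foo()/boo() now return lightweight tagged proxies instead of ints.
            struct FooExpr {
                C& c;
                operator int() { return c.foo_impl(); }  // standalone use still works
            };
            struct BooExpr {
                C& c;
                operator int() { return c.boo_impl(); }
            };

            FooExpr foo() { return FooExpr{*this}; }
            BooExpr boo() { return BooExpr{*this}; }

            // The operand types encode which call came first, so the two
            // non-commutative orders dispatch to different fused methods.
            friend int operator+(FooExpr a, BooExpr) { return a.c.foo_and_boo_impl(); }
            friend int operator+(BooExpr a, FooExpr) { return a.c.boo_and_foo_impl(); }
        };

        int main() {
            C c;
            int i = c.foo() + c.boo();  // resolves to one foo_and_boo_impl() call
            int j = c.boo() + c.foo();  // order matters: boo_and_foo_impl()
            int k = c.foo();            // plain use still evaluates foo alone
            std::cout << i << ' ' << j << ' ' << k << '\n';  // prints "3 4 1"
        }

    The design point being probed: the proxy objects cost nothing at runtime (the compiler sees through them), and the fusion decision is made entirely by overload resolution at compile time, which is the same trick matrix/vector libraries use to collapse chained operators into single optimized kernels.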

    Read the article

  • Converting AES encryption token code in C# to php

    - by joey
    Hello, I have the following .NET code, which takes two inputs: 1) a 128-bit base64-encoded key and 2) the userid. It outputs the AES-encrypted token. I need the PHP equivalent of the same code, but I don't know which PHP classes correspond to RNGCryptoServiceProvider, RijndaelManaged, ICryptoTransform, MemoryStream and CryptoStream. I'm stuck, so any help regarding this would be really appreciated.

        using System;
        using System.Text;
        using System.IO;
        using System.Security.Cryptography;

        class AESToken
        {
            [STAThread]
            static int Main(string[] args)
            {
                if (args.Length != 2)
                {
                    Console.WriteLine("Usage: AESToken key userId\n");
                    Console.WriteLine("key    Specifies 128-bit AES key base64 encoded supplied by MediaNet to the partner");
                    Console.WriteLine("userId specifies the unique id");
                    return -1;
                }

                string key = args[0];
                string userId = args[1];

                StringBuilder sb = new StringBuilder();
                // This example code uses the magic string "CAMB2B". The implementer
                // must use the appropriate magic string for the web services API.
                sb.Append("CAMB2B");
                sb.Append(args[1]); // userId
                sb.Append('|');     // pipe char
                sb.Append(System.DateTime.UtcNow.ToString("yyyy-MM-dd HH:mm:ssUTC")); // timestamp
                Byte[] payload = Encoding.ASCII.GetBytes(sb.ToString());

                byte[] salt = new Byte[16]; // 16 bytes of random salt
                RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();
                rng.GetBytes(salt);

                // The plaintext is 16 bytes of salt followed by the payload.
                byte[] plaintext = new byte[salt.Length + payload.Length];
                salt.CopyTo(plaintext, 0);
                payload.CopyTo(plaintext, salt.Length);

                // The AES cryptor: 128-bit key, 128-bit block size, CBC mode.
                RijndaelManaged cryptor = new RijndaelManaged();
                cryptor.KeySize = 128;
                cryptor.BlockSize = 128;
                cryptor.Mode = CipherMode.CBC;
                cryptor.GenerateIV();
                cryptor.Key = Convert.FromBase64String(args[0]); // the key
                byte[] iv = cryptor.IV;                          // the IV

                // Do the encryption.
                ICryptoTransform encryptor = cryptor.CreateEncryptor(cryptor.Key, iv);
                MemoryStream ms = new MemoryStream();
                CryptoStream cs = new CryptoStream(ms, encryptor, CryptoStreamMode.Write);
                cs.Write(plaintext, 0, plaintext.Length);
                cs.FlushFinalBlock();
                byte[] ciphertext = ms.ToArray();
                ms.Close();
                cs.Close();

                // Build the token.
                byte[] tokenBytes = new byte[iv.Length + ciphertext.Length];
                iv.CopyTo(tokenBytes, 0);
                ciphertext.CopyTo(tokenBytes, iv.Length);
                string token = Convert.ToBase64String(tokenBytes);

                Console.WriteLine(token);
                return 0;
            }
        }

    Please help. Thank you.
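    A rough PHP equivalent (a sketch, not a verified drop-in: the magic string, timestamp format and layout -- token = IV || ciphertext, plaintext = 16-byte salt || payload -- are copied from the C# above; this uses ext/openssl and PHP 7+ for random_bytes, where code of the original's era would have used mcrypt). RijndaelManaged with a 128-bit block in CBC mode with its default PKCS7 padding corresponds to OpenSSL's "aes-128-cbc":

        <?php
        function makeToken(string $base64Key, string $userId): string
        {
            // payload = magic || userId || '|' || UTC timestamp
            $payload = 'CAMB2B' . $userId . '|' . gmdate('Y-m-d H:i:s') . 'UTC';

            $salt = random_bytes(16);            // ~ RNGCryptoServiceProvider
            $iv   = random_bytes(16);            // ~ cryptor.GenerateIV()
            $key  = base64_decode($base64Key);

            // AES-128-CBC with PKCS#7 padding; OPENSSL_RAW_DATA returns raw bytes.
            $ciphertext = openssl_encrypt($salt . $payload, 'aes-128-cbc', $key,
                                          OPENSSL_RAW_DATA, $iv);

            return base64_encode($iv . $ciphertext);   // token = IV || ciphertext
        }

        echo makeToken($argv[1], $argv[2]), "\n";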

    Read the article

  • Push Trunk or Push Branch to Production

    - by coffeeaddict
I'd like to get an idea of some processes around the build process with Tortoise SVN. Primarily I'm wondering, do you push: the mainline trunk, or a branch, after QA has grabbed it into a local working copy, tested it, and some build process pushes that branch?

The problem I have is I work at a craphole (hey, it is what it is and I'm venting on Stack Overflow, you better believe it... a good way to relieve stress due to complete utter chaos) and we have no formal process for pushing anything. In fact, even worse, my boss codes directly against production. When I have changes, he pushes the mainline trunk.

The problem comes when I make database changes on our dev database for, let's say, branch A. Well... that breaks branches B and C. I have like 4 projects going on at once! Why? Well, I will not get into that (chaos). So consequently I rename a table field, or add a field or whatever in SQL Server, and voilà, my other branches now have stale code pointing to the previous field names. So what happens? I have to merge certain changes to this branch, to that branch, etc. It feels like a war zone.

Finally, what happens is I try to merge only the minimum. Let's say I made DB changes for branch A's code but now I have to jump back onto branch B's project. Well, I need to merge "some" of A's changes over for those database changes so that B's code doesn't bomb out and is able to work with the new table changes.

Finally, the boss pushes the mainline trunk to production. Now I get an email: "you forgot to remove the hyperlink for this". That hyperlink was actually a feature I added in branch A. But what he's talking about here is that he just pushed the mainline trunk to production, which now has my merged changes from branch B and some database scripts for branch A, because remember, I had DB changes, and if he pushes code it has to reflect those changes, so some partial database changes must also be pushed even if they're not related to this project. Well... I missed the hyperlink, so kill me. Maybe that's why we need a build process, boss? (Sorry, it's been a nightmare working here, which is why this thread is getting so detailed.)

Anyway, obviously this is a nightmare. And he dictates almost everything. The only reason we have source control is because I've worked on hard-core teams and that's the first thing you set up. Well, there was none here. The problem is I can't dictate the structure... he does, but he's never really used source control!! My God. So we have no QA. This is an e-commerce website. That's another huge issue. So consequently I'm expected to be perfect. That means the mainline trunk needs to be perfect for whatever we're pushing, whatever branch feature. Is this ludicrous? wtf do I do? I could go off on him after trying so many times, tactfully, to explain that we need a freakin' build process (not just copying the local mainline trunk to production!), but I've tried to push for it before and got yelled at. So I gave up on that.

So it will help me tremendously to know how others are pushing their source from Tortoise to production. I was not the person doing the pushing on previous teams, so really I'm not too versed in build processes. We are a fairly good-sized e-commerce site and get a couple of million hits a month.
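Not specific to any one shop, but here is a minimal sketch of one widely used Subversion release-branch workflow (run inside a working copy so the ^/ repository-relative syntax resolves; all paths and version numbers are made up):

# Cut a release branch from trunk; QA tests only this branch.
svn copy ^/trunk ^/branches/release-1.4 -m "Branch for 1.4 QA"

# Fixes QA finds are committed on the release branch, then folded
# back so trunk does not lose them:
svn merge --reintegrate ^/branches/release-1.4   # run in a trunk working copy

# When QA signs off, tag the exact tested revision; production only
# ever runs code deployed from a tag, never from trunk directly.
svn copy ^/branches/release-1.4 ^/tags/1.4.0 -m "1.4.0 released to production"

The point of the tag is that "what is live" is always a known, immutable revision, so database scripts and code from unrelated branches never ride along by accident.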

    Read the article

  • SSH service will not start on fresh Cygwin 1.7.15 install

    - by Coder6841
OS: Windows 7 x64
Cygwin: 1.7.15-1
OpenSSH: 6.0p1-1

I'm attempting to install an SSH server on Windows 7. The tutorial that I'm following to do this is here: http://www.howtogeek.com/howto/41560/how-to-get-ssh-command-line-access-to-windows-7-using-cygwin/

The issue is that upon executing the net start sshd command I get the following output:

The CYGWIN sshd service is starting.
The CYGWIN sshd service could not be started.
The service did not report an error.
More help is available by typing NET HELPMSG 3534.

Here is the full output of the setup:

AdminUser@ThisComputer ~
$ ssh-host-config
*** Info: Generating /etc/ssh_host_key
*** Info: Generating /etc/ssh_host_rsa_key
*** Info: Generating /etc/ssh_host_dsa_key
*** Info: Generating /etc/ssh_host_ecdsa_key
*** Info: Creating default /etc/ssh_config file
*** Info: Creating default /etc/sshd_config file
*** Info: Privilege separation is set to yes by default since OpenSSH 3.3.
*** Info: However, this requires a non-privileged account called 'sshd'.
*** Info: For more info on privilege separation read /usr/share/doc/openssh/README.privsep.
*** Query: Should privilege separation be used? (yes/no) yes
*** Info: Note that creating a new user requires that the current account have
*** Info: Administrator privileges. Should this script attempt to create a
*** Query: new local account 'sshd'? (yes/no) yes
*** Info: Updating /etc/sshd_config file
*** Query: Do you want to install sshd as a service?
*** Query: (Say "no" if it is already installed as a service) (yes/no) yes
*** Query: Enter the value of CYGWIN for the daemon: []
*** Info: On Windows Server 2003, Windows Vista, and above, the
*** Info: SYSTEM account cannot setuid to other users -- a capability
*** Info: sshd requires. You need to have or to create a privileged
*** Info: account. This script will help you do so.
*** Info: You appear to be running Windows XP 64bit, Windows 2003 Server,
*** Info: or later. On these systems, it's not possible to use the LocalSystem
*** Info: account for services that can change the user id without an
*** Info: explicit password (such as passwordless logins [e.g. public key
*** Info: authentication] via sshd).
*** Info: If you want to enable that functionality, it's required to create
*** Info: a new account with special privileges (unless a similar account
*** Info: already exists). This account is then used to run these special
*** Info: servers.
*** Info: Note that creating a new user requires that the current account
*** Info: have Administrator privileges itself.
*** Info: No privileged account could be found.
*** Info: This script plans to use 'cyg_server'.
*** Info: 'cyg_server' will only be used by registered services.
*** Query: Do you want to use a different name? (yes/no) no
*** Query: Create new privileged user account 'cyg_server'? (yes/no) yes
*** Info: Please enter a password for new user cyg_server. Please be sure
*** Info: that this password matches the password rules given on your system.
*** Info: Entering no password will exit the configuration.
*** Query: Please enter the password:
*** Query: Reenter:
*** Info: User 'cyg_server' has been created with password '[CENSORED]'.
*** Info: If you change the password, please remember also to change the
*** Info: password for the installed services which use (or will soon use)
*** Info: the 'cyg_server' account.
*** Info: Also keep in mind that the user 'cyg_server' needs read permissions
*** Info: on all users' relevant files for the services running as 'cyg_server'.
*** Info: In particular, for the sshd server all users' .ssh/authorized_keys
*** Info: files must have appropriate permissions to allow public key
*** Info: authentication. (Re-)running ssh-user-config for each user will set
*** Info: these permissions correctly. [Similar restrictions apply, for
*** Info: instance, for .rhosts files if the rshd server is running, etc].
*** Info: The sshd service has been installed under the 'cyg_server'
*** Info: account. To start the service now, call `net start sshd' or
*** Info: `cygrunsrv -S sshd'. Otherwise, it will start automatically
*** Info: after the next reboot.
*** Info: Host configuration finished. Have fun!

AdminUser@ThisComputer ~
$ net start sshd
The CYGWIN sshd service is starting.
The CYGWIN sshd service could not be started.
The service did not report an error.
More help is available by typing NET HELPMSG 3534.

Note that on the line "*** Query: Enter the value of CYGWIN for the daemon: []" I haven't entered anything. Tutorials often say to use ntsec or ntsec tty here, but those options have been removed from the latest version of OpenSSH. I've tried using them anyway and the result is the same. The file /var/log/sshd.log is empty.

If I try just running the command /usr/sbin/sshd, I get the output "/var/empty must be owned by root and not group or world-writable.". The /var/empty directory has the following permissions: drwxr-xr-x+ 1 cyg_server root 0 May 29 15:28 empty. Google searches on this error did not turn up any working fixes. One person seems to have solved it with the command chown SYSTEM /var/empty, but that did not fix it in my case.
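Some hedged next steps, using standard Cygwin/OpenSSH commands; whether cyg_server is the right owner for /var/empty on this install is an assumption based on the privileged account ssh-host-config created above:

# Run sshd in the foreground with debug output; this usually prints
# the real failure that the service wrapper swallows:
/usr/sbin/sshd -D -d

# On Cygwin the "root" that sshd wants owning its privsep directory
# is the privileged service account, so try:
chown cyg_server /var/empty
chmod 755 /var/empty

# Then retry the service and check its log:
net start sshd
cat /var/log/sshd.log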

    Read the article

  • How to use pthread_atfork() and pthread_once() to reinitialize mutexes in child processes

    - by Blair Zajac
We have a C++ shared library that uses ZeroC's Ice library for RPC, and unless we shut down Ice's runtime, we've observed child processes hanging on random mutexes. The Ice runtime starts threads, has many internal mutexes and keeps open file descriptors to servers. Additionally, we have a few mutexes of our own to protect our internal state. Our shared library is used by hundreds of internal applications, so we don't have control over when the process calls fork(); we need a way to safely shut down Ice and lock our mutexes while the process forks.

Reading the POSIX standard on pthread_atfork() on handling mutexes and internal state:

Alternatively, some libraries might have been able to supply just a child routine that reinitializes the mutexes in the library and all associated states to some known value (for example, what it was when the image was originally executed). This approach is not possible, though, because implementations are allowed to fail *_init() and *_destroy() calls for mutexes and locks if the mutex or lock is still locked. In this case, the child routine is not able to reinitialize the mutexes and locks.

On Linux, this test C program returns EPERM from pthread_mutex_unlock() in the child pthread_atfork() handler. Linux requires adding _NP to the PTHREAD_MUTEX_ERRORCHECK macro for it to compile. This program is linked from this good thread.

Given that it's technically not safe or legal to unlock or destroy a mutex in the child, I'm thinking it's better to hold pointers to mutexes and have the child allocate a new pthread_mutex_t on the heap, leaving the parent's mutexes alone and accepting a small memory leak. The only issue is how to reinitialize the state of the library, and I'm thinking of resetting a pthread_once_t. Since POSIX has an initializer for pthread_once_t, maybe it can be reset to its initial state.

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

static pthread_once_t once_control = PTHREAD_ONCE_INIT;
static pthread_mutex_t *mutex_ptr = 0;

static void setup_new_mutex()
{
    mutex_ptr = malloc(sizeof(*mutex_ptr));
    pthread_mutex_init(mutex_ptr, 0);
}

static void prepare()
{
    pthread_mutex_lock(mutex_ptr);
}

static void parent()
{
    pthread_mutex_unlock(mutex_ptr);
}

static void child()
{
    // Reset the once control.
    pthread_once_t once = PTHREAD_ONCE_INIT;
    memcpy(&once_control, &once, sizeof(once_control));
    setup_new_mutex();
}

static void init()
{
    setup_new_mutex();
    pthread_atfork(&prepare, &parent, &child);
}

int my_library_call(int arg)
{
    pthread_once(&once_control, &init);
    pthread_mutex_lock(mutex_ptr);
    // Do something here that requires the lock.
    int result = 2*arg;
    pthread_mutex_unlock(mutex_ptr);
    return result;
}

In the above sample, in child() I only reset the pthread_once_t by making a copy of a fresh pthread_once_t initialized with PTHREAD_ONCE_INIT. A new pthread_mutex_t is only created when the library function is invoked in the child process. This is hacky, but maybe the best way of dealing with this while skirting the standards. If the pthread_once_t contains a mutex, then the system must have a way of initializing it from its PTHREAD_ONCE_INIT state. If it contains a pointer to a mutex allocated on the heap, then it'll be forced to allocate a new one and set the address in the pthread_once_t. I'm hoping it doesn't use the address of the pthread_once_t for anything special, which would defeat this.
Searching the comp.programming.threads group for pthread_atfork() shows a lot of good discussion, and how little the POSIX standard really provides to solve this problem. There's also the issue that one should only call async-signal-safe functions from pthread_atfork() handlers; it appears the most important one is the child handler, where only a memcpy() is done. Does this work? Is there a better way of dealing with the requirements of our shared library?

    Read the article

  • Exposing model object using bindings in custom NSCell of NSTableView

    - by Hooligancat
I am struggling trying to perform what I would think would be a relatively common task. I have an NSTableView that is bound to its array via an NSArrayController. The array controller has its content set to an NSMutableArray that contains one or more NSObject instances of a model class. What I don't know how to do is expose the model inside the NSCell subclass in a way that is bindings-friendly.

For the purpose of illustration, we'll say that the model is a person consisting of a first name, last name, age and gender. Thus the model would appear something like this:

@interface PersonModel : NSObject {
    NSString *firstName;
    NSString *lastName;
    NSString *gender;
    int age;
}

Obviously with the appropriate setters, getters, init, etc. for the class. In my controller class I define an NSTableView, an NSMutableArray and an NSArrayController:

@interface ControllerClass : NSObject {
    IBOutlet NSTableView *myTableView;
    NSMutableArray *myPersonArray;
    IBOutlet NSArrayController *myPersonArrayController;
}

Using Interface Builder I can easily bind the model to the appropriate columns:

myPersonArray --> myPersonArrayController --> table column binding

This works fine. So I remove the extra columns, leaving one hidden column that is bound to the NSArrayController (this creates and keeps the association between each row and the NSArrayController), so that I am down to one visible column in my NSTableView and one hidden column. I create an NSCell subclass and put the appropriate drawing method in place to create the cell. In my awakeFromNib I establish the custom NSCell subclass:

PersonModel *aCustomCell = [[[PersonModel alloc] init] autorelease];
[[myTableView tableColumnWithIdentifier:@"customCellColumn"] setDataCell:aCustomCell];

This, too, works fine from a drawing perspective. I get my custom cell showing up in the column, and it repeats for every managed object in my array controller. If I add an object to or remove an object from the array controller, the table updates accordingly.

However... I was under the impression that my PersonModel object would be available from within my NSCell subclass. But I don't know how to get to it. I don't want to set each NSCell using setters and getters, because then I'm breaking the whole model concept by storing data in the NSCell instead of referencing it from the array controller. And yes, I do need a custom NSCell, so having multiple columns is not an option.

Where to from here? In addition to the Google and Stack Overflow searches, I've done the obligatory walkthrough of Apple's docs and don't seem to have found the answer. I have found a lot of references that beat around the bush, but nothing involving an NSArrayController. The controller makes life very easy when binding to other elements of the model entity (such as in a master/detail scenario). I have also found a lot of references (although no answers) to using Core Data, but I'm not using Core Data. As per the norm, I'm very grateful for any assistance that can be offered!
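One approach that may be worth trying (a sketch, not tested here): bind the column's value through the array controller with self as the model key path, so each cell's objectValue becomes the whole PersonModel rather than a single field. PersonCell below is a hypothetical name standing in for the NSCell subclass.

// In awakeFromNib, bind the column's value to the arranged objects
// themselves; the trailing ".self" makes KVC hand over each object whole.
[[myTableView tableColumnWithIdentifier:@"customCellColumn"]
    bind:@"value"
    toObject:myPersonArrayController
    withKeyPath:@"arrangedObjects.self"
    options:nil];

// In the PersonCell subclass, objectValue is then the row's PersonModel:
- (void)drawInteriorWithFrame:(NSRect)cellFrame inView:(NSView *)controlView
{
    PersonModel *person = [self objectValue];  // set by the binding per row
    NSString *text = [NSString stringWithFormat:@"%@ %@",
                         [person firstName], [person lastName]];
    [text drawInRect:cellFrame withAttributes:nil];
}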

    Read the article

  • curl multipart/form-data help

    - by user253530
Hi, I am trying to post some data to a website using cURL. The posting process has 3 steps: 1. enter a URL, submit, and get to the 2nd step with some fields already completed; 2. submit again after entering some more data, and preview the form; 3. submit the final data. The problem is that after the second step, the form data looks like this:

POSTDATA =-----------------------------12249266671528
Content-Disposition: form-data; name="title"
Filme 2010, filme 2009, filme noi, programe TV, program cinema, premiere cinema, trailere filme - CineMagia.ro
-----------------------------12249266671528
Content-Disposition: form-data; name="category"
3
-----------------------------12249266671528
Content-Disposition: form-data; name="tags"
filme, programe tv, program cinema
-----------------------------12249266671528
Content-Disposition: form-data; name="bodytext"
Filme 2010, filme 2009, filme noi, programe TV, program cinema, premiere cinema, trailere filme
-----------------------------12249266671528
Content-Disposition: form-data; name="trackback"

-----------------------------12249266671528
Content-Disposition: form-data; name="url"
http://cinemagia.ro
-----------------------------12249266671528
Content-Disposition: form-data; name="phase"
2
-----------------------------12249266671528
Content-Disposition: form-data; name="randkey"
9510520
-----------------------------12249266671528
Content-Disposition: form-data; name="id"
17753
-----------------------------12249266671528--

I am stuck trying to devise an algorithm that will generate this kind of POST data for the second step. Just to mention: the URL of the form never changes; it is always http://www.xxx.com/submit. There is only a hidden input called "phase" that changes according to the step I am currently on (phase = 1, phase = 2, phase = 3). Any help, be it code, pseudo-code or just guidance, would be greatly appreciated.

My code so far:

function postBlvsocialbookmarkingcom($curl, $vars)
{
    extract($vars);
    $baseUrl = "http://www.blv-socialbookmarking.com/";

    // Step 1: login.
    $curl->setRedirect();
    $page = $curl->post($baseUrl.'login.php?return=/index.php',
        array('username' => $username,
              'password' => $password,
              'processlogin' => '1',
              'return' => '/index.php'));
    if ($err = $curl->getError()) {
        return $err;
    }

    // Post step 1: get the random key.
    $page = $curl->post($baseUrl.'/submit', array());
    $randomKey = explode('<input type="hidden" name="randkey" value="', $page);
    $randKey = explode('"', $randomKey[1]);

    $page = $curl->post($baseUrl.'/submit',
        array('url' => $address, 'phase' => '1', 'randkey' => $randKey[0], 'id' => 'c_1'));
    if ($err = $curl->getError()) {
        return $err;
    }

    // Post step 2.
    $page = $curl->post($baseUrl.'/submit',
        array('title' => $title,
              'category' => '1',
              'tags' => $tags,
              'bodytext' => $description,
              'phase' => '2'));
    if ($err = $curl->getError()) {
        return $err;
    }
    echo $page;

    // Post step 3.
    $page = $curl->post($baseUrl.'/submit', array('phase' => '3'));
    if ($err = $curl->getError()) {
        return $err;
    }
    echo $page;
}
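For what it's worth, that multipart body never has to be built by hand: if CURLOPT_POSTFIELDS is given an array rather than a string, libcurl generates the multipart/form-data encoding and boundary itself. A hedged sketch of step 2 as a raw cURL call (the randkey and id values are assumptions: they would be scraped from the step-1 response, as the existing code already does for randkey):

<?php
// Step-2 post as a raw cURL call; libcurl builds the multipart body.
$ch = curl_init('http://www.xxx.com/submit');  // placeholder URL from the question
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, array(
    'title'     => $title,
    'category'  => '3',
    'tags'      => $tags,
    'bodytext'  => $description,
    'trackback' => '',
    'url'       => $address,
    'phase'     => '2',
    'randkey'   => $randKey[0],  // scraped from the step-1 response
    'id'        => $id,          // scraped from the step-1 response
));
$preview = curl_exec($ch);
curl_close($ch);
?>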

    Read the article

  • AT91SAM7X512's SPI peripheral gets disabled on write to SPI_TDR

    - by Dor
My AT91SAM7X512's SPI peripheral gets disabled on the Xth time (X varies) that I write to SPI_TDR. As a result, the processor hangs in the while loop that checks the TDRE flag in SPI_SR. This while loop is located in the function SPI_Write(), which belongs to the software package/library provided by Atmel. The problem occurs arbitrarily: sometimes everything works OK, and sometimes it fails on repeated attempts (an attempt = downloading the same binary to the MCU and running the program).

The configurations are (defined in the order of writing):

SPI_MR: MSTR = 1, PS = 0, PCSDEC = 0, PCS = 0111, DLYBCS = 0
SPI_CSR[3]: CPOL = 0, NCPHA = 1, CSAAT = 0, BITS = 0000, SCBR = 20, DLYBS = 0, DLYBCT = 0
SPI_CR: SPIEN = 1

After setting the configurations, the code verifies that the SPI is enabled by checking the SPIENS flag. I perform a transmission of bytes as follows:

const short int dataSize = 5;
// Fill the array with some data.
unsigned char data[dataSize] = {0xA5, 0x34, 0x12, 0x00, 0xFF};
short int i;
volatile unsigned short dummyRead;

SetCS3(); // NPCS3 == PIOA15
for (i = 0; i < dataSize; i++)
{
    mySPI_Write(data[i]);
    while ((AT91C_BASE_SPI0->SPI_SR & AT91C_SPI_TXEMPTY) == 0);
    dummyRead = SPI_Read(); // SPI_Read() from Atmel's library
}
ClearCS3();

/**********************************/

void mySPI_Write(unsigned char data)
{
    while ((AT91C_BASE_SPI0->SPI_SR & AT91C_SPI_TXEMPTY) == 0);
    AT91C_BASE_SPI0->SPI_TDR = data;
    while ((AT91C_BASE_SPI0->SPI_SR & AT91C_SPI_TDRE) == 0); // <-- This is where
    // the processor hangs, because the SPI peripheral is disabled
    // (SPIENS equals 0), which makes TDRE equal to 0 forever.
}

Questions:

1. What's causing the SPI peripheral to become disabled on the write to SPI_TDR?
2. Should I uncomment the line in SPI_Write() that reads the SPI_RDR register, i.e. the 4th line in the following code? (The 4th line is originally marked as a comment.)

void SPI_Write(AT91S_SPI *spi, unsigned int npcs, unsigned short data)
{
    // Discard contents of RDR register.
    //volatile unsigned int discard = spi->SPI_RDR;

    /* Send data */
    while ((spi->SPI_SR & AT91C_SPI_TXEMPTY) == 0);
    spi->SPI_TDR = data | SPI_PCS(npcs);
    while ((spi->SPI_SR & AT91C_SPI_TDRE) == 0);
}

3. Is there something wrong with the code above that transmits the 5 bytes of data?

Please note: the NPCS3 line is a GPIO line (i.e. in PIO mode) and is not controlled by the SPI controller. I control this line myself in code, by asserting/de-asserting ChipSelect#3 (NPCS3) when needed. The reason I do so is that problems occurred while trying to let the SPI controller control this pin.

I didn't reset the SPI peripheral twice, because the errata says to reset it twice only if a software reset is performed, which I don't do. Quoting the errata:

If a software reset (SWRST in the SPI Control Register) is performed, the SPI may not work properly (the clock is enabled before the chip select.)
Problem Fix/Workaround
The SPI Control Register field, SWRST (Software Reset) needs to be written twice to be correctly set.

I noticed that sometimes, if I put a delay before the write to the SPI_TDR register (in mySPI_Write()), the code works perfectly and the communication succeeds.

Useful links: AT91SAM7X Series Preliminary.pdf, ATMEL software package/library, spi.c from Atmel's library, spi.h from Atmel's library.

An example of initializing the SPI and performing a transfer of 5 bytes would be highly appreciated and helpful.
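A hedged guess at question 1, since the symptom (SPIEN silently dropping) matches a mode fault: on the AT91SAM7 parts, if the NPCS0/NSS pin is not owned by the SPI and floats or is driven low while the SPI is in master mode, the controller detects a mode fault and disables itself. If that is what is happening here, setting the mode-fault-disable bit when writing SPI_MR should stop it. AT91C_SPI_MODFDIS and AT91C_SPI_MODF are the names used in Atmel's headers; verify them against your header version:

/* Sketch: disable mode-fault detection before enabling the SPI. */
AT91C_BASE_SPI0->SPI_MR |= AT91C_SPI_MODFDIS;   /* do this before SPIEN = 1 */

/* Diagnostic: after a failure, check whether a mode fault was latched. */
if (AT91C_BASE_SPI0->SPI_SR & AT91C_SPI_MODF)
{
    /* Mode fault occurred: something pulled NPCS0/NSS low. */
}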

    Read the article

  • MySQL Query GROUP_CONCAT Over Multiple Rows

    - by PeteGO
I'm getting name and address data out of generic question/answer data to create a kind of normalised reporting database. The query I've got uses GROUP_CONCAT and works for an individual question set, but not for multiple sets. I've tried to simplify what I'm doing by using just forename and surname and just 3 records: 2 for one person and 1 for another. In reality, though, there are more than 300,000 records.

Example of results with qs.Id = 1:

QuestionSetId   Forename   Surname
-------------------------------------------------------
1               Bob        Jones

Example of results with qs.Id IN (1, 2, 3):

QuestionSetId   Forename        Surname
-------------------------------------------------------
3               Bob,Bob,Frank   Jones,Jones,Smith

What I would like to see for qs.Id IN (1, 2, 3):

QuestionSetId   Forename   Surname
-------------------------------------------------------
1               Bob        Jones
2               Bob        Jones
3               Frank      Smith

So how can I make the 2nd example return a separate row for each set of name and address information? I realise the current way the data is stored is "questionable", but I cannot change the way the data is stored. I can get sets of individual answers, but I'm not sure how to combine the others.

My simplified schema, which I cannot change:

CREATE TABLE StaticQuestion (
    Id INT NOT NULL,
    StaticText VARCHAR(500) NOT NULL);

CREATE TABLE Question (
    Id INT NOT NULL,
    Text VARCHAR(500) NOT NULL);

CREATE TABLE StaticQuestionQuestionLink (
    Id INT NOT NULL,
    StaticQuestionId INT NOT NULL,
    QuestionId INT NOT NULL,
    DateEffective DATETIME NOT NULL);

CREATE TABLE Answer (
    Id INT NOT NULL,
    Text VARCHAR(500) NOT NULL);

CREATE TABLE QuestionSet (
    Id INT NOT NULL,
    DateEffective DATETIME NOT NULL);

CREATE TABLE QuestionAnswerLink (
    Id INT NOT NULL,
    QuestionSetId INT NOT NULL,
    QuestionId INT NOT NULL,
    AnswerId INT NOT NULL,
    StaticQuestionId INT NOT NULL);

Some example data for only forename and surname:

INSERT INTO StaticQuestion (Id, StaticText) VALUES (1, 'FirstName'), (2, 'LastName');
INSERT INTO Question (Id, Text) VALUES (1, 'What is your first name?'), (2, 'What is your forename?'), (3, 'What is your Surname?');
INSERT INTO StaticQuestionQuestionLink (Id, StaticQuestionId, QuestionId, DateEffective) VALUES (1, 1, 1, '2001-01-01'), (2, 1, 2, '2008-08-08'), (3, 2, 3, '2001-01-01');
INSERT INTO Answer (Id, Text) VALUES (1, 'Bob'), (2, 'Jones'), (3, 'Bob'), (4, 'Jones'), (5, 'Frank'), (6, 'Smith');
INSERT INTO QuestionSet (Id, DateEffective) VALUES (1, '2002-03-25'), (2, '2009-05-05'), (3, '2009-08-06');
INSERT INTO QuestionAnswerLink (Id, QuestionSetId, QuestionId, AnswerId, StaticQuestionId) VALUES (1, 1, 1, 1, 1), (2, 1, 3, 2, 2), (3, 2, 2, 3, 1), (4, 2, 3, 4, 2), (5, 3, 2, 5, 1), (6, 3, 3, 6, 2);

Just in case SQLFiddle is down, here are the 3 queries from the examples I've linked to:

1: working query, but only on 1 set of data.
SELECT MAX(QuestionSetId) AS QuestionSetId,
       GROUP_CONCAT(Forename) AS Forename,
       GROUP_CONCAT(Surname) AS Surname
FROM (SELECT x.QuestionSetId,
             CASE x.StaticQuestionId WHEN 1 THEN Text END AS Forename,
             CASE x.StaticQuestionId WHEN 2 THEN Text END AS Surname
      FROM (SELECT (SELECT link.StaticQuestionId
                    FROM StaticQuestionQuestionLink link
                    WHERE link.Id = qa.QuestionId
                      AND link.DateEffective <= qs.DateEffective
                      AND link.StaticQuestionId IN (1, 2)
                    ORDER BY link.DateEffective DESC
                    LIMIT 1) AS StaticQuestionId,
                   a.Text,
                   qa.QuestionSetId
            FROM QuestionSet qs
            INNER JOIN QuestionAnswerLink qa ON qs.Id = qa.QuestionSetId
            INNER JOIN Answer a ON qa.AnswerId = a.Id
            WHERE qs.Id IN (1)) x) y

2: working query, but undesired results on multiple sets of data.

SELECT MAX(QuestionSetId) AS QuestionSetId,
       GROUP_CONCAT(Forename) AS Forename,
       GROUP_CONCAT(Surname) AS Surname
FROM (SELECT x.QuestionSetId,
             CASE x.StaticQuestionId WHEN 1 THEN Text END AS Forename,
             CASE x.StaticQuestionId WHEN 2 THEN Text END AS Surname
      FROM (SELECT (SELECT link.StaticQuestionId
                    FROM StaticQuestionQuestionLink link
                    WHERE link.Id = qa.QuestionId
                      AND link.DateEffective <= qs.DateEffective
                      AND link.StaticQuestionId IN (1, 2)
                    ORDER BY link.DateEffective DESC
                    LIMIT 1) AS StaticQuestionId,
                   a.Text,
                   qa.QuestionSetId
            FROM QuestionSet qs
            INNER JOIN QuestionAnswerLink qa ON qs.Id = qa.QuestionSetId
            INNER JOIN Answer a ON qa.AnswerId = a.Id
            WHERE qs.Id IN (1, 2, 3)) x) y

3: working query on multiple sets of data, though only on 1 field (answer).

SELECT qs.Id AS QuestionSet,
       a.Text AS Answer
FROM QuestionSet qs
INNER JOIN QuestionAnswerLink qalink ON qs.Id = qalink.QuestionSetId
INNER JOIN StaticQuestionQuestionLink sqqlink ON qalink.QuestionId = sqqlink.QuestionId
INNER JOIN Answer a ON qalink.AnswerId = a.Id
WHERE sqqlink.StaticQuestionId = 1 /* FirstName */
  AND sqqlink.DateEffective = (SELECT DateEffective
                               FROM StaticQuestionQuestionLink
                               WHERE StaticQuestionId = 1
                                 AND DateEffective <= qs.DateEffective
                               ORDER BY DateEffective DESC
                               LIMIT 1)
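A hedged suggestion, tested only against the sample data above: query 2 collapses everything into a single row because the outer SELECT aggregates without a GROUP BY. Grouping the pivoted derived table by QuestionSetId gives one row per set (GROUP_CONCAT ignores the NULLs that the CASE expressions produce):

SELECT QuestionSetId,
       GROUP_CONCAT(Forename) AS Forename,
       GROUP_CONCAT(Surname) AS Surname
FROM (SELECT x.QuestionSetId,
             CASE x.StaticQuestionId WHEN 1 THEN Text END AS Forename,
             CASE x.StaticQuestionId WHEN 2 THEN Text END AS Surname
      FROM (SELECT (SELECT link.StaticQuestionId
                    FROM StaticQuestionQuestionLink link
                    WHERE link.Id = qa.QuestionId
                      AND link.DateEffective <= qs.DateEffective
                      AND link.StaticQuestionId IN (1, 2)
                    ORDER BY link.DateEffective DESC
                    LIMIT 1) AS StaticQuestionId,
                   a.Text,
                   qa.QuestionSetId
            FROM QuestionSet qs
            INNER JOIN QuestionAnswerLink qa ON qs.Id = qa.QuestionSetId
            INNER JOIN Answer a ON qa.AnswerId = a.Id
            WHERE qs.Id IN (1, 2, 3)) x) y
GROUP BY QuestionSetId;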

    Read the article

  • What precautions should you take when a senior employee leaves?

    - by Mahin
EDIT: I agree one should check the reasons why a senior-level employee is leaving. But I am interested in knowing the official/management/technical/legal steps one should take after it's decided that he is leaving, so that life after him is smooth.

What steps should management take when a senior programmer/team lead leaves your company? Some that I have thought about are:

1) If he used to manage hosting and domain matters, change the passwords of the domain control panels and hosting panels.
2) If your published web sites have a maintenance account and he is aware of the credentials of that account, change those details as well.
3) Suspend his mail account for some time and forward all emails for that account to an ex-employee account. After some time, close that account.

What are the other things one should check? I am expecting the answer to be a general checklist one should follow. It should include both technical scenarios and management scenarios.

Notable suggestions so far:

- Effectively transfer the responsibilities of that employee to another one without causing any potential delay in your work.
- Protect your source code. If possible, make them sign something to say that they don't have copies of the source code. You can also consider an NDA here.
- Use the notice period to train his replacement. Any new code for the project is then written by the replacement, with the help of the person who is leaving.
- Ask him to create a document of things he thinks you should know.
- Make sure he checks everything in; from then on, any checkout is done only by the replacement.
- Emails: copy his email account off to a .pst file (this assumes Outlook) and make the file available to his replacement. The employee should probably be given a chance to scrub the email first. If you are going to keep his account open for whatever reason, check that no rules are created that forward incoming emails to an alternate address.
- Copy the hard drive of his computer to a network location and have someone senior go through it to see if there are any files (drafts of performance reviews or other sensitive material) that someone else might need.
- Clearance from the Accounts, Finance, Security, Library, etc. departments. Obtain all company property: laptops, keys, etc.
- If there is no reason not to, you should reward a departing person for their many years of service. Write them letters of recommendation (even if they already have a new job lined up). Say goodbye, and keep the door open.
- Make sure any outside clients know that the departing employee is not their main contact anymore.
- Never neglect the exit interview/debriefing.
- Confirm the last day of employment so that there is no misunderstanding.
- Inform HR if the employee is on H1B status; there is paperwork required to notify the government when an H1B employee leaves.
- Depending on how senior he is and what position he holds, you might spend some time convincing him not to take the rest of the engineering staff with him.
- Make sure he spends his last days on a good note, because if he does not leave on a good note, he can easily pollute the minds of his colleagues.

Best regards,
Mahin Gupta

EDIT: Now offered a bounty on it to get more detailed responses and practical suggestions.

    Read the article

  • Update twitter profile image using OAuth

    - by sjobe
I'm trying to get the Twitter update_profile_image call to work using OAuth. I was using cURL with regular authentication and everything was working fine, but I switched to OAuth using this library, and now everything except update_profile_image works. I read something about Twitter OAuth having problems with multipart data, but that was a while ago and the plugin is supposed to have dealt with that issue.

My working regular-authentication cURL code:

$url = 'http://api.twitter.com/1/account/update_profile_image.xml';
$uname = $_POST['username'];
$pword = $_POST['password'];
$img_path = 'xxx';
$userpwd = $uname . ':' . $pword;
$img_post = array('image' => '@' . $img_path . ';type=image/jpeg', 'tile' => 'true');
$format = 'xml'; // alternative: json
$message = 'Test update with a random num' . rand();
$opts = array(CURLOPT_URL => $url,
              CURLOPT_FOLLOWLOCATION => true,
              CURLOPT_RETURNTRANSFER => true,
              CURLOPT_HEADER => true,
              CURLOPT_POST => true,
              CURLOPT_POSTFIELDS => $img_post,
              CURLOPT_HTTPAUTH => CURLAUTH_ANY,
              CURLOPT_USERPWD => $userpwd,
              CURLOPT_HTTPHEADER => array('Expect:'),
              CURLINFO_HEADER_OUT => true);
$ch = curl_init();
curl_setopt_array($ch, $opts);
$response = curl_exec($ch);
$err = curl_error($ch);
$info = curl_getinfo($ch);
curl_close($ch);

My current OAuth code (I had to cut it down, so don't look for minor syntax errors):

include 'EpiCurl.php';
include 'EpiOAuth.php';
include 'EpiTwitter.php';
include 'secret.php';

$twitterObj = new EpiTwitter($consumer_key, $consumer_secret);
$twitterObj->setToken($_GET['oauth_token']);
$token = $twitterObj->getAccessToken();
$twitterObj->setToken($token->oauth_token, $token->oauth_token_secret);

try {
    $img_path = 'xxx';
    //$twitterObj->post_accountUpdate_profile_image(array('@image' => "@".$img_path));
    $twitterObj->post('/account/update_profile_image.json', array('@image' => "@".$img_path));
    $twitterObj->post_statusesUpdate(array('status' => 'This is my new status:' . rand())); // This works
    $twitterInfo = $twitterObj->get_accountVerify_credentials();
    echo $twitterInfo->responseText;
} catch (Exception $e) {
    echo $e->getMessage();
}

I've been trying to figure this out for a while, so ANY help would be greatly appreciated. I'm not in any way tied to this library, so feel free to recommend others.

    Read the article

  • Problem with custom paging in ASP.NET

    - by JohnCC
I'm trying to add custom paging to my site using the ObjectDataSource paging. I believe I've correctly added the stored procedures I need and brought them up through the DAL and BLL. The problem is that when I try to use it on a page, I get an empty GridView.

<%@ Page Language="C#" AutoEventWireup="true" CodeFile="PageTest.aspx.cs" Inherits="developer_PageTest" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <asp:ObjectDataSource ID="ObjectDataSource1" SelectMethod="GetMessagesPaged" EnablePaging="true"
            SelectCountMethod="GetMessagesCount" TypeName="MessageTable" runat="server">
            <SelectParameters>
                <asp:Parameter Name="DeviceID" Type="Int32" DefaultValue="112" />
                <asp:Parameter Name="StartDate" Type="DateTime" DefaultValue="" ConvertEmptyStringToNull="true" />
                <asp:Parameter Name="EndDate" Type="DateTime" DefaultValue="" ConvertEmptyStringToNull="true" />
                <asp:Parameter Name="BasicMatch" Type="Boolean" ConvertEmptyStringToNull="true" DefaultValue="" />
                <asp:Parameter Name="ContainsPosition" Type="Boolean" ConvertEmptyStringToNull="true" DefaultValue="" />
                <asp:Parameter Name="Decoded" Type="Boolean" ConvertEmptyStringToNull="true" DefaultValue="" />
                <%--
                <asp:Parameter Name="StartRowIndex" Type="Int32" DefaultValue="10" />
                <asp:Parameter Name="MaximumRows" Type="Int32" DefaultValue="10" />
                --%>
            </SelectParameters>
        </asp:ObjectDataSource>
        <asp:GridView ID="GridView1" runat="server" DataSourceID="ObjectDataSource1" AllowPaging="true" PageSize="10"></asp:GridView>
        <br />
        <asp:Label runat="server" ID="lblCount"></asp:Label>
    </div>
    </form>
</body>
</html>

When I set EnablePaging to false on the ObjectDataSource and add the commented-out StartRowIndex and MaximumRows params back into the markup, I get data, so it really seems like the data layer is behaving as it should. There's code in the code-behind to put the value of the GetMessagesCount call into lblCount, and that always has a sensible value in it.

I've tried breaking in the BLL and stepping through; the backend is getting called and it returns what looks like the right information and data, but somehow between the ObjectDataSource and the GridView it's vanishing.

I created a mock data source which returned numbered rows of random numbers, attached it to this form, and the custom paging worked, so I think my understanding of the technique is good. I just can't see why it fails here! Any help really appreciated.

(EDIT: here's the code-behind.)

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.ComponentModel;

public partial class developer_PageTest : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        lblCount.Text = String.Format("Count = {0}",
            MessageTable.GetMessagesCount(112, null, null, null, null, null));
    }
}
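One hedged thing to check, since it is a classic cause of exactly this symptom: with EnablePaging="true" the ObjectDataSource appends the two paging arguments itself, using the names in StartRowIndexParameterName and MaximumRowsParameterName, which default to the camel-cased startRowIndex and maximumRows (note the commented-out Pascal-cased parameters in the markup would not match those defaults). The two paging parameters must not appear in SelectParameters, and the BLL method has to spell the names exactly. A sketch of the expected signature (the Message type is an assumption):

// Sketch: the last two parameter names must match the ObjectDataSource
// defaults (startRowIndex, maximumRows), not just the types.
public static List<Message> GetMessagesPaged(
    int deviceID,
    DateTime? startDate, DateTime? endDate,
    bool? basicMatch, bool? containsPosition, bool? decoded,
    int startRowIndex, int maximumRows)
{
    // Pass startRowIndex/maximumRows straight through to the paged
    // stored procedure and return only that page of rows.
    return new List<Message>(); // replace with the stored-procedure call
}

If the DAL already uses different names, the alternative is setting StartRowIndexParameterName and MaximumRowsParameterName on the ObjectDataSource to match them.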

    Read the article

  • Does an ESEUTIL defrag of an Exchange store also perform an integrity check/repair on it?

    - by Bigbio2002
Earlier this morning, store.exe fuzzled up in one way or another, which necessitated a restart of our Exchange server. It came back online with no errors or problems, all the transaction logs replayed successfully, and all the stores mounted as normal. To me, it was just one of those random crashes; however, our consultant suspects it was caused by corruption in one of the stores. Perhaps he's correct, since he has far more experience than me, but that's not the point.

To fix the suspected errors, he's planning to run an ESEUTIL defrag (via PerfectDisk) on them, which he claims will also fix any errors present. From what I understand, defrag, verify and repair are 3 separate actions, and a defrag does not imply any kind of integrity check. Is this correct? Are there any dangers to running a straight-up defrag on a database that might be corrupt?

Edit: Here's the first error in the event log, which indicated the start of the problems we were having. Anyone know what it might indicate?

Event Type: Error
Event Source: Microsoft Exchange Server
Event Category: None
Event ID: 1000
Date: 11/23/2011
Time: 8:15:47 AM
User: N/A
Computer: SERVER
Description:
Faulting application exsp.dll, version 6.5.7638.1, stamp 430e735b, faulting module kernel32.dll, version 5.2.3790.4480, stamp 49c51f0a, debug? 0, fault address 0x0000bef7.
For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
Data:
0000: 41 00 70 00 70 00 6c 00   A.p.p.l.
0008: 69 00 63 00 61 00 74 00   i.c.a.t.
0010: 69 00 6f 00 6e 00 20 00   i.o.n. .
0018: 46 00 61 00 69 00 6c 00   F.a.i.l.
0020: 75 00 72 00 65 00 20 00   u.r.e. .
0028: 20 00 65 00 78 00 73 00   .e.x.s.
0030: 70 00 2e 00 64 00 6c 00   p...d.l.
0038: 6c 00 20 00 36 00 2e 00   l. .6...
0040: 35 00 2e 00 37 00 36 00   5...7.6.
0048: 33 00 38 00 2e 00 31 00   3.8...1.
0050: 20 00 34 00 33 00 30 00   .4.3.0.
0058: 65 00 37 00 33 00 35 00   e.7.3.5.
0060: 62 00 20 00 69 00 6e 00   b. .i.n.
0068: 20 00 6b 00 65 00 72 00   .k.e.r.
0070: 6e 00 65 00 6c 00 33 00   n.e.l.3.
0078: 32 00 2e 00 64 00 6c 00   2...d.l.
0080: 6c 00 20 00 35 00 2e 00   l. .5...
0088: 32 00 2e 00 33 00 37 00   2...3.7.
0090: 39 00 30 00 2e 00 34 00   9.0...4.
0098: 34 00 38 00 30 00 20 00   4.8.0. .
00a0: 34 00 39 00 63 00 35 00   4.9.c.5.
00a8: 31 00 66 00 30 00 61 00   1.f.0.a.
00b0: 20 00 66 00 44 00 65 00   .f.D.e.
00b8: 62 00 75 00 67 00 20 00   b.u.g. .
00c0: 30 00 20 00 61 00 74 00   0. .a.t.
00c8: 20 00 6f 00 66 00 66 00   .o.f.f.
00d0: 73 00 65 00 74 00 20 00   s.e.t. .
00d8: 30 00 30 00 30 00 30 00   0.0.0.0.
00e0: 62 00 65 00 66 00 37 00   b.e.f.7.
00e8: 0d 00 0a 00               ....
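For reference, a hedged sketch of how the three operations map onto the classic Exchange 2003-era tools (the commands are real; the database path is a placeholder, and a hard repair is a last resort because it can discard pages):

eseutil /g "C:\mdbdata\priv1.edb"   :: integrity check only, read-only
eseutil /d "C:\mdbdata\priv1.edb"   :: offline defrag; rebuilds the file but
                                    ::   expects an already-consistent database
eseutil /p "C:\mdbdata\priv1.edb"   :: hard repair; may throw away bad pages
isinteg -s SERVER -fix -test alltests   :: logical, application-level check

That split supports your reading: a defrag by itself neither verifies nor repairs page-level corruption.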

    Read the article

  • WinQual: Why would WER not accept code-signing certificates?

    - by Ian Boyd
In 2005 I tried to establish a WinQual account with Microsoft, so I could pick up our (if any) crash dump files submitted automatically through Windows Error Reporting (WER). I was not allowed to have my crash dumps, because I don't have a VeriSign certificate. Instead I have a cheaper one, generated by a VeriSign subsidiary: Thawte.

The method by which you join is: you digitally sign a sample exe they provide. This proves that you are the same signer that signed the apps they got crash dumps from in the wild. Cryptographically, the private key is needed to generate a digital signature on an executable. Only the holder of that private key can create a signature for the matching public key. It doesn't matter who generated that private key. That includes certificates generated by: self-signing, Wells Fargo, DigiCert, SecureTrust, Trustware, QuoVadis, GoDaddy, Entrust, Cybertrust, GeoTrust, GlobalSign, Comodo, Thawte, VeriSign.

Yet Microsoft's WinQual only accepts digital certificates generated by VeriSign. Not even VeriSign's subsidiaries are good enough (Thawte).

Can anyone think of any technical, legal or ethical reason why Microsoft doesn't want to accept other code-signing certificates? The WinQual site says:

Why Is a Digital Certificate Required for Winqual Membership? A digital certificate helps protect your company from individuals who seek to impersonate members of your staff or who would otherwise commit acts of fraud against your company. Using a digital certificate enables proof of an identity for a user or an organization.

Is a Thawte digital certificate somehow not secure?

Two years later, I sent a reminder notice to WinQual that I had been waiting to be able to get at my crash dumps. The response from the WinQual team was:

Hello, Thanks for the reminder. We have notified the appropriate people that this is still a request.

In 2008 I asked this question in a Microsoft support forum, and the response was:

We are only setup to accept VeriSign Certificates at this point. We have not had an overwhelming demand to support other types of certificates.

What can it possibly mean to not be "setup" to accept other kinds of certificates? If the thumbprint of the key that signed the WinQual.exe test app is the same as the thumbprint that signed the executable whose crash dump you got in the wild, it is proven: they are my crash dumps, give them to me. And it's not like there's a special API to check whether a VeriSign digital signature is valid, as opposed to all other digital signatures. A valid signature is valid no matter who generated the key. Microsoft is free not to trust the signer, but that's not the same as identity.

So that is my question: can anyone think of any practical reason why WinQual isn't set up to support other digital signatures? One person theorized that the answer is that they're just lazy:

Not that I know but I would assume that the team running the WinQual system is a live team and not a dev team - as in, personality and skillset geared towards maintenance of existing systems. I could be wrong though.

They don't want to do the work to change it. But can anyone think of anything that would need to be changed? It's the same logic no matter what generated the key: "does the thumbprint match?". What am I missing?

    Read the article

< Previous Page | 376 377 378 379 380 381 382 383 384 385 386 387  | Next Page >