Search Results

Search found 36922 results on 1477 pages for 'custom post tags'.

Page 99/1477 | < Previous Page | 95 96 97 98 99 100 101 102 103 104 105 106  | Next Page >

  • Can this query be corrected, or is a different table structure needed?

    - by sandeepan
    This is a bit lengthy, but I have provided sufficient details and kept things clear. Please see if you can help (I will surely accept an answer if it solves my problem). I am sure a person experienced with this can help, or can suggest how to decide the table structure.

    About the system:

        - There are tutors who create classes.
        - A tag-based search approach is followed.
        - Tag relations are created/edited when new tutors register or edit their profile data, and when tutors create classes (this makes tutors and classes searchable). For simplicity, consider only tutor name and class name as the fields matched against search keywords.

    In this example, tutor "Sandeepan Nath" has created a class called "first class", and tutor "Bob Cratchit" has created a class called "new class".

    Desired search results: AND logic is applied to the search keywords, matched against the combined class and tutor data (class name + tutor name). In other words, a class should be shown only if every search term is present in its class name or its tutor name. To be clear: searching "first class" returns the class with id_wc = 1 (working); searching "Sandeepan class" should also return the class with id_wc = 1 (not working in System 2).

    The problem with profile editing and searching, in one sentence: I am facing a conflict between the ease of profile editing (editing tag relations when tutor profiles are edited) and the ease of the search logic. In the beginning we had one table structure and search was easy, but the tag-editing logic was very clumsy and unmaintainable (see System 1 below). So we created separate tag-relation tables to make profile editing simpler, but search has become difficult. Please load the dumps so that you can run the search query given below and see the results.

    System 1 (previous system - search easy, profile editing difficult): a single table called all_tag_relations held all the tag relations. The tags table below is common to both systems 1 and 2.

        CREATE TABLE IF NOT EXISTS `all_tag_relations` (
          `id_tag_rel` int(10) NOT NULL AUTO_INCREMENT,
          `id_tag` int(10) unsigned NOT NULL DEFAULT '0',
          `id_tutor` int(10) DEFAULT NULL,
          `id_wc` int(10) unsigned DEFAULT NULL,
          PRIMARY KEY (`id_tag_rel`),
          KEY `All_Tag_Relations_FKIndex1` (`id_tag`),
          KEY `id_wc` (`id_wc`),
          KEY `id_tag` (`id_tag`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        INSERT INTO `all_tag_relations` (`id_tag_rel`, `id_tag`, `id_tutor`, `id_wc`) VALUES
        (1, 1, 1, NULL), (2, 2, 1, NULL), (3, 1, 1, 1), (4, 2, 1, 1),
        (5, 3, 1, 1), (6, 4, 1, 1), (7, 6, 2, NULL), (8, 7, 2, NULL),
        (9, 6, 2, 2), (10, 7, 2, 2), (11, 5, 2, 2), (12, 4, 2, 2);

        CREATE TABLE IF NOT EXISTS `tags` (
          `id_tag` int(10) unsigned NOT NULL AUTO_INCREMENT,
          `tag` varchar(255) DEFAULT NULL,
          PRIMARY KEY (`id_tag`),
          UNIQUE KEY `tag` (`tag`),
          KEY `id_tag` (`id_tag`),
          KEY `tag_2` (`tag`),
          KEY `tag_3` (`tag`),
          KEY `tag_4` (`tag`),
          FULLTEXT KEY `tag_5` (`tag`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=8;

        INSERT INTO `tags` (`id_tag`, `tag`) VALUES
        (1, 'Sandeepan'), (2, 'Nath'), (3, 'first'), (4, 'class'),
        (5, 'new'), (6, 'Bob'), (7, 'Cratchit');

    Please note that for every class, the tag relations of its tutor have to be duplicated. For example, for the class with id_wc = 1, the tag relation records with id_tag_rel = 3 and 4 are extras if you compare them with the records with id_tag_rel = 1 and 2.
    System 2 (present system - profile editing easy, search difficult): two separate tables, tutors_tag_relations and webclasses_tag_relations, hold the corresponding tag-relation data (please load into a separate database):

        CREATE TABLE IF NOT EXISTS `tutors_tag_relations` (
          `id_tag_rel` int(10) NOT NULL AUTO_INCREMENT,
          `id_tag` int(10) unsigned NOT NULL DEFAULT '0',
          `id_tutor` int(10) DEFAULT NULL,
          PRIMARY KEY (`id_tag_rel`),
          KEY `All_Tag_Relations_FKIndex1` (`id_tag`),
          KEY `id_tag` (`id_tag`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        INSERT INTO `tutors_tag_relations` (`id_tag_rel`, `id_tag`, `id_tutor`) VALUES
        (1, 1, 1), (2, 2, 1), (3, 6, 2), (4, 7, 2);

        CREATE TABLE IF NOT EXISTS `webclasses_tag_relations` (
          `id_tag_rel` int(10) NOT NULL AUTO_INCREMENT,
          `id_tag` int(10) unsigned NOT NULL DEFAULT '0',
          `id_tutor` int(10) DEFAULT NULL,
          `id_wc` int(10) DEFAULT NULL,
          PRIMARY KEY (`id_tag_rel`),
          KEY `webclasses_Tag_Relations_FKIndex1` (`id_tag`),
          KEY `id_wc` (`id_wc`),
          KEY `id_tag` (`id_tag`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        INSERT INTO `webclasses_tag_relations` (`id_tag_rel`, `id_tag`, `id_tutor`, `id_wc`) VALUES
        (1, 3, 1, 1), (2, 4, 1, 1), (3, 5, 2, 2), (4, 4, 2, 2);

    The tags table is identical to the one in System 1 above. The all_tag_relations table is created at search run time and populated from the two relation tables:

        CREATE TABLE IF NOT EXISTS `all_tag_relations` (
          `id_tag_rel` int(10) NOT NULL AUTO_INCREMENT,
          `id_tag` int(10) unsigned NOT NULL DEFAULT '0',
          `id_tutor` int(10) DEFAULT NULL,
          `id_wc` int(10) unsigned DEFAULT NULL,
          PRIMARY KEY (`id_tag_rel`),
          KEY `All_Tag_Relations_FKIndex1` (`id_tag`),
          KEY `id_wc` (`id_wc`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        insert into all_tag_relations select NULL, id_tag, id_tutor, NULL from tutors_tag_relations;
        insert into all_tag_relations select NULL, id_tag, id_tutor, id_wc from webclasses_tag_relations;

    Here you can see how easily a tutor's first name can be edited, in only one place. But search has become really difficult. On being advised to use a temporary table, I am creating one at every search request, dumping all the necessary data from Tutors_Tag_Relations and Webclasses_Tag_Relations into it, and then searching from it - but I am still not able to get classes if I search by tutor name.

    This is the query which searches "first class". Running it on both systems shows correct results (returns the class with id_wc = 1):
        SELECT wtagrels.id_wc,
               SUM(DISTINCT(wtagrels.id_tag = 3)) AS key_1_total_matches,
               SUM(DISTINCT(wtagrels.id_tag = 4)) AS key_2_total_matches
        FROM all_tag_relations AS wtagrels
        WHERE (wtagrels.id_tag = 3 OR wtagrels.id_tag = 4)
        GROUP BY wtagrels.id_wc
        HAVING key_1_total_matches = 1 AND key_2_total_matches = 1
        LIMIT 0, 20

    But searching for "Sandeepan class" works only with System 1. Here is the query which searches "Sandeepan class":

        SELECT wtagrels.id_wc,
               SUM(DISTINCT(wtagrels.id_tag = 1)) AS key_1_total_matches,
               SUM(DISTINCT(wtagrels.id_tag = 4)) AS key_2_total_matches
        FROM all_tag_relations AS wtagrels
        WHERE (wtagrels.id_tag = 1 OR wtagrels.id_tag = 4)
        GROUP BY wtagrels.id_wc
        HAVING key_1_total_matches = 1 AND key_2_total_matches = 1
        LIMIT 0, 20

    Can anybody alter this query, with a proper join or otherwise, to get correct results? That would solve my problem in a nice way. As you can figure out, the reason it does not work in System 2 is that in System 1, every class has an additional tag relation linking the class to its tutor's name (for the class "first class", the records with id_tag_rel = 3 and 4), which is what returns the class when searching by tutor name.

    So you can see the trade-off between search difficulty and profile-editing difficulty in the two systems. How do I overcome both? I have to reach a conclusion soon. So far my reasoning is that it is definitely not good, from a code-maintainability point of view, to follow the single tag-relations table of System 1: in a real system, when editing a field like "tutor qualifications", there can be as many records in the tag-relations table as there are words in a tutor's qualification (one word in a field = one tag relation). Now suppose a tutor has 100 classes. When he edits his qualification, all the tag-relation rows corresponding to him are deleted, and as many copies must be created (per the new qualification data) as there are classes. This becomes particularly difficult if more searchable fields are added later. The code cannot be robust.

    Is the best solution to follow System 2 (editing happens in one table - no extra work for each and every class) and somehow re-create the all_tag_relations table as in System 1 (from tutors_tag_relations and webclasses_tag_relations), creating the extra tutor tag relations for each and every class by a tutor (which is currently missing in System 2's temporary all_tag_relations table)? That would be a time-consuming script, and I doubt the table can be recreated without resorting to a PHP script (MySQL alone cannot do it). The problem is that running all this at search time will definitely make search slow. So how do such systems work? How are such situations handled? I thought about running a cron that initiates such a PHP script, say every minute, and replaces the existing all_tag_relations table with one built from the new tag relations in tutors_tag_relations and webclasses_tag_relations ("replaces" meaning: create a new table, delete the original, and rename the new one to all_tag_relations, otherwise search won't work during that period - or is there a better way?). The result would be that any change by a tutor is reflected in search within the next minute rather than immediately. An alternative would be to initiate that PHP script every time a tutor edits his profile. But since many users may edit their profiles concurrently, would the creation of so many tables be a burden, and could MySQL make the server slow?
    Any help would be appreciated, and a working solution will be accepted as the answer. Thanks, Sandeepan
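    One possible direction (a sketch against the System 2 dumps above, not a tested solution): join the class tags to the tutor tags directly, so a search term can match on either side and no temporary table is needed. Alias names and the limit are illustrative.

        SELECT wtr.id_wc,
               SUM(wtr.id_tag = 1 OR ttr.id_tag = 1) > 0 AS key_1_matched,
               SUM(wtr.id_tag = 4 OR ttr.id_tag = 4) > 0 AS key_2_matched
        FROM webclasses_tag_relations AS wtr
        LEFT JOIN tutors_tag_relations AS ttr ON ttr.id_tutor = wtr.id_tutor
        GROUP BY wtr.id_wc
        HAVING key_1_matched AND key_2_matched
        LIMIT 0, 20;

    For "Sandeepan class" (tags 1 and 4), id_wc = 1 matches because tag 1 comes from the tutor side and tag 4 from the class side; id_wc = 2 is dropped because tag 1 appears on neither side.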

    Read the article

  • Javascript + PHP $_POST array empty

    - by Peterim
    While trying to send a POST request via xmlhttp.open("POST", "url", true) (JavaScript) to the server, I get an empty $_POST array. Firebug shows that the data is being sent. Here is the data string from Firebug: a=1&q=151a45a150.... But $_POST['q'] returns nothing. The interesting thing is that file_get_contents('php://input') does have my data (the string above), but PHP somehow doesn't recognize it. Tried both $_POST and $_REQUEST; nothing works. Headers being sent:

        POST /test.php HTTP/1.1
        Host: website.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us;q=0.7,en;q=0.3
        Accept-Encoding: gzip,deflate
        Accept-Charset: utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://website.com/
        Content-Length: 156
        Content-Type: text/plain; charset=UTF-8
        Pragma: no-cache
        Cache-Control: no-cache

    Thank you for any suggestions.
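    The Content-Type header above is the likely culprit: PHP only populates $_POST for application/x-www-form-urlencoded (or multipart/form-data) bodies, not text/plain. A minimal sketch of the fix on the JavaScript side (URL and data string taken from the question):

        var xmlhttp = new XMLHttpRequest();
        xmlhttp.open("POST", "/test.php", true);
        // Tell PHP this is a form-encoded body so it parses it into $_POST
        xmlhttp.setRequestHeader("Content-Type", "application/x-www-form-urlencoded; charset=UTF-8");
        xmlhttp.send("a=1&q=151a45a150");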

    Read the article

  • PHP question about global variables and form requests

    - by user220201
    Hi, this is probably a stupid question, but I'll ask anyway since I have no idea. I have written basic PHP code which serves forms. Say I have a login page served by login.php, referenced from login.html like this:

        <form action="login.php" method="post">

    This seems to imply that every POST needs its own PHP file, doesn't it? That feels weird. Is there a way to have a single file, say code.php, and have each of the forms handled by a function instead? EDIT: Specifically, say I have 5 forms that are used one after the other in my application. After login, the user performs tasks A, B, C and D, each of which is sent to the server as a POST request. So instead of having A.php, B.php, C.php and D.php, I would like a single code.php with A(), B(), C() and D() as functions. Is there a way to do this? Also, on the same note, how do I deal with, say, a global array (e.g. an array of currently logged-in users) across multiple forms? I want to do this without writing to a DB. I know it's probably better to write to a DB and query it, but is it even possible with a global array? The reason I was thinking about having all the form functions in one file was to share such a global array. Thanks, - Pav
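    A single entry point can dispatch on a hidden field; a minimal sketch (field, function and file names are illustrative):

        <?php
        // code.php - every form posts here with a hidden "action" input,
        // e.g. <input type="hidden" name="action" value="login">
        session_start();

        $action = isset($_POST['action']) ? $_POST['action'] : '';
        switch ($action) {
            case 'login': handle_login(); break;
            case 'A':     handle_task_a(); break;
            case 'B':     handle_task_b(); break;
            default:      header('HTTP/1.1 400 Bad Request');
        }

    Note that a PHP global array will not persist between requests - each request starts fresh, so shared state has to live in $_SESSION (per user), a file, or a database, not in an in-memory global.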

    Read the article

  • Why doesn't access database update?

    - by Ryan
    Recently I met a strange problem; see the code snippet below:

        var
          sqlCommand: string;
          connection: TADOConnection;
          qry: TADOQuery;
        begin
          connection := TADOConnection.Create(nil);
          try
            connection.ConnectionString := 'Provider=Microsoft.Jet.OLEDB.4.0;Data Source=Test.MDB;Persist Security Info=False';
            connection.Open();
            qry := TADOQuery.Create(nil);
            try
              qry.Connection := connection;
              qry.SQL.Text := 'Select * from aaa';
              qry.Open;
              qry.Append;
              qry.FieldByName('TestField1').AsString := 'test';
              qry.Post;
              beep;
            finally
              qry.Free;
            end;
          finally
            connection.Free;
          end;
        end;

    First, create a new Access database named test.mdb under the test project's directory, and in it create a table named aaa with a single text field named TestField1. Set a breakpoint at the "beep" line, then launch the application under the IDE in debug mode. When the IDE stops at the breakpoint (qry.Post has been executed), open test.mdb in Microsoft Access and open table aaa: you will find no changes in it. If you press F9 and let the application continue running, a new record is inserted into table aaa; but if you press Ctrl+F2 to terminate the application at the breakpoint, no record is inserted. Under normal circumstances, a new record should be inserted into table aaa after qry.Post executes. Can anyone explain this problem? It has troubled me for a long time. Thanks! BTW, the IDE is Delphi 2010, and the Access .mdb file was created by Microsoft Access 2007 under Windows 7.
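    One thing worth trying (a sketch, not a confirmed fix): the Jet engine lazy-writes by default, so the insert may sit in the provider's cache until the connection closes. Wrapping the Post in an explicit ADO transaction forces a commit point:

        connection.BeginTrans;
        try
          qry.Append;
          qry.FieldByName('TestField1').AsString := 'test';
          qry.Post;
          connection.CommitTrans;  // ask the provider to flush the write to the .mdb now
        except
          connection.RollbackTrans;
          raise;
        end;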

    Read the article

  • Invoking code both before and after WebControl.Render method

    - by Dirk
    I have a set of custom ASP.NET server controls, most of which derive from CompositeControl. I want to implement a uniform look for "required" fields across all control types by wrapping each control in a specific piece of HTML/CSS markup. For example:

        <div class="requiredInputContainer">
           ...custom control markup...
        </div>

    I'd love to abstract this behavior in such a way as to avoid doing something ugly like this in every custom control, present and future:

        public class MyServerControl : TextBox, IRequirableField {
            public bool IsRequired { get; set; }
            protected override void Render(HtmlTextWriter writer) {
                RequiredFieldHelper.RenderBeginTag(this, writer);
                // render custom control markup
                RequiredFieldHelper.RenderEndTag(this, writer);
            }
        }

        public static class RequiredFieldHelper {
            public static void RenderBeginTag(IRequirableField field, HtmlTextWriter writer) {
                // check field.IsRequired, render based on its value
            }
            public static void RenderEndTag(IRequirableField field, HtmlTextWriter writer) {
                // check field.IsRequired, render based on its value
            }
        }

    If all of my custom controls derived from the same base class, I could conceivably use Template Method to enforce the before/after behavior; but I have several base classes, and I'd rather not end up with a really convoluted class hierarchy anyway. It feels like I should be able to design something more elegant (i.e. something that adheres to DRY and OCP) by leveraging the functional aspects of C#, but I'm drawing a blank.
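    One approach that avoids touching the hierarchy at all (a sketch, assuming the IRequirableField interface from above; the adapter is registered per control type in a .browser file):

        using System.Web.UI;
        using System.Web.UI.Adapters;

        public class RequiredFieldAdapter : ControlAdapter
        {
            protected override void Render(HtmlTextWriter writer)
            {
                IRequirableField field = Control as IRequirableField;
                if (field != null && field.IsRequired)
                {
                    writer.AddAttribute(HtmlTextWriterAttribute.Class, "requiredInputContainer");
                    writer.RenderBeginTag(HtmlTextWriterTag.Div);
                    base.Render(writer);   // the control renders its own markup here
                    writer.RenderEndTag();
                }
                else
                {
                    base.Render(writer);
                }
            }
        }

    The wrapping logic lives in one place, and each control only opts in by implementing the interface.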

    Read the article

  • Codeigniter - change url at method call

    - by NemoPS
    I was wondering if the following can be done in CodeIgniter. Let's assume I have a file called Post.php, used to manage posts in an admin interface. It has several methods, such as index (lists all posts), add, update, delete... Now, I access the add method, so the URL becomes /posts/add, and I add some data. I click "save" to add the new post. It calls the same method, with an if statement like "if ($this->input->post('addnew'))" - if that value was passed, it calls the model and adds the post to the database. Here follows the problem: if everything worked fine, it goes to the index with the list of all posts and displays a confirmation, BUT the URL would still be posts/add, since I called $this->index() after verifying the data was added. I cannot redirect to "posts/" since in that case no confirmation message would be shown! So my question is: can I call a method from another one in the same class and have the URL set to that method (/posts/index instead of /posts/add)? It's kind of confusing, but I hope I gave you enough info to spot the problem. Cheers!
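    The usual CodeIgniter pattern for this is a redirect plus flashdata, which survives exactly one request; a sketch (model and view names are illustrative, and the session library must be loaded):

        public function add()
        {
            if ($this->input->post('addnew')) {
                $this->post_model->insert($this->input->post());
                $this->session->set_flashdata('confirmation', 'Post saved.');
                redirect('posts');  // browser lands on /posts with the message intact
            }
            $this->load->view('posts/add');
        }

    In index(), read it back with $this->session->flashdata('confirmation') and display it if present.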

    Read the article

  • How can data templates in generic.xaml get applied automatically?

    - by Thiado de Arruda
    I have a custom control with a ContentPresenter that will have an arbitrary object set as its content. This object has no constraint on its type, so I want the control to display its content based on any data templates defined by the application, or on data templates defined in Generic.xaml. If in an application I define a data template (without a key, because I want it applied automatically to objects of that type) and I bind the custom control to an object of that type, the data template gets applied automatically. But I have some data templates defined for certain types in the Generic.xaml where I define the custom control's style, and those templates are not getting applied automatically. Here is the generic.xaml: If I set an object of type 'PredefinedType' as the content in the ContentPresenter, the data template does not get applied. However, it works if I define the data template in the App.xaml of the application that's using the custom control. Does anyone have a clue? I really can't assume that the user of the control will define this data template, so I need some way to tie it up with the custom control.
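    A workaround sketch: implicit data-template lookup walks the element's own Resources up to Application.Resources and does not consult the theme dictionary for arbitrary types, so the control can merge its template dictionary into its own Resources instead (assembly and file names are illustrative):

        public MyCustomControl()
        {
            var templates = new ResourceDictionary
            {
                Source = new Uri("/MyControlLibrary;component/Themes/DataTemplates.xaml",
                                 UriKind.Relative)
            };
            // Makes the keyless DataTemplates visible to the ContentPresenter lookup
            Resources.MergedDictionaries.Add(templates);
        }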

    Read the article

  • Passing $_GET or $_POST data to PHP script that is run with wget

    - by Matt
    Hello, I have the following line of PHP code which works great: exec( 'wget http://www.mydomain.com/u1.php /dev/null &' ); u1.php acts to do various types of maintenance on my server and the above command makes it happen in the background. No problems there. But I need to pass variable data to u1.php before it's executed. I'd like to pass POST data preferably, but could accommodate GET or SESSION data if POST isn't an option. Basically the type of data being passed is user-specific and will vary depending on who is logged in to the site and triggering the above code. I've tried adding the GET data to the end of the URL and that didn't work. So how else might I be able to send the data to u1.php? POST data preferred, SESSION data would work as well (but I tried this and it didn't pick up the logged in user's session data). GET would be a last resort. Thanks!
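    wget can send a form-encoded POST body with --post-data; a sketch (parameter names are illustrative - note that the bare /dev/null in the original line is actually treated by wget as a second URL, so -O /dev/null is used here to discard the output):

        $data = http_build_query(array('user_id' => $userId, 'task' => 'maintenance'));
        exec('wget -q -O /dev/null --post-data ' . escapeshellarg($data) .
             ' http://www.mydomain.com/u1.php > /dev/null 2>&1 &');

    u1.php then reads the values from $_POST as usual. Sessions don't carry over because wget makes a fresh request without the user's session cookie; passing the needed values explicitly, as above, sidesteps that.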

    Read the article

  • $this.attr() stops Jquery dialog from opening

    - by user342391
    I am using the following code to post my form data to player/index.php and open the response in a dialog. Because I have multiple of these forms in my table, I need to use $(this). But now it doesn't open in a dialog. New code (doesn't open the dialog; the data appears in the URL instead):

        $("#recordingdialog").dialog({
            // other options, width, height, etc...
            modal: true, bgiframe: true, autoOpen: false,
            height: 200, width: 350,
            draggable: true, resizeable: true,
            title: "Play Recording"
        });

        $(this).click(function() {
            $.post('player/index.php', $(this).attr('form').serialize(), function (data) {
                $("#recordingdialog").html(data).dialog("open");
            });
            return false;
        });

    Old code (only works on one form):

        $("#recordingdialog").dialog({
            // other options, width, height, etc...
            modal: true, bgiframe: true, autoOpen: false,
            height: 550, width: 550,
            draggable: true, resizeable: true,
            title: "Play Recording"
        });

        $("#wavajax button").click(function() {
            $.post('player/index.php', $("#wavajax").serialize(), function (data) {
                $("#recordingdialog").html(data).dialog("open");
            });
            return false;
        });
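    A sketch of why the new code fails and one way to fix it: outside an event handler, $(this) is not bound to a form, and .attr('form') returns a string attribute rather than a jQuery object, so .serialize() never runs. Binding to every form's button and serializing the enclosing form avoids both problems (the form selector is illustrative):

        $("form.wav-form button").click(function () {
            var $form = $(this).closest("form");   // the form this button lives in
            $.post("player/index.php", $form.serialize(), function (data) {
                $("#recordingdialog").html(data).dialog("open");
            });
            return false;
        });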

    Read the article

  • IIS 7.5 Custom HTTP Response Headers Not Working

    - by Craig
    Trying to set up custom HTTP response headers on a new install of IIS 7.5 on Windows Server 2008 R2 Standard, and they are not working. Default headers work fine (X-Powered-By, etc.). Modifying default header values works (i.e. changing X-Powered-By to ASP.NET2). Modifying a default header's name causes the header to stop being output (i.e. changing X-Powered-By to X-Powered-By2). The site in question is a test site with a single HTML page. Custom headers also don't work on an ASP.NET 2.0 site. I've tried setting the headers at the global level and at the site level, to no effect.
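    For reference, a minimal sketch of what the site-level setting should produce in web.config (header name and value are illustrative) - if the section looks like this but the header still never appears, something downstream of IIS is stripping it:

        <system.webServer>
          <httpProtocol>
            <customHeaders>
              <add name="X-Test-Header" value="hello" />
            </customHeaders>
          </httpProtocol>
        </system.webServer>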

    Read the article

  • Nginx upload PUT and POST

    - by w00t
    I am trying to make nginx accept the POST and PUT methods to upload files. I have compiled nginx_upload_module-2.2.0, but I can't find any how-to. I simply want to use only nginx for this - no reverse proxy, no other backend, and no PHP. Is this achievable? This is my conf:

        nginx version: nginx/1.2.3
        TLS SNI support enabled
        configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g' --add-module=/usr/src/nginx-1.2.3/nginx_upload_module-2.2.0

        server {
            listen 80;
            server_name example.com;

            location / {
                root /html;
                autoindex on;
            }

            location /upload {
                root /html;
                autoindex on;
                upload_store /html/upload 1;
                upload_set_form_field $upload_field_name.name "$upload_file_name";
                upload_set_form_field $upload_field_name.content_type "$upload_content_type";
                upload_set_form_field $upload_field_name.path "$upload_tmp_path";
                upload_aggregate_form_field "$upload_field_name.md5" "$upload_file_md5";
                upload_aggregate_form_field "$upload_field_name.size" "$upload_file_size";
                upload_pass_form_field "^submit$|^description$";
                upload_cleanup 400 404 499 500-505;
            }
        }

    As an upload form, I'm trying to use the one listed at the end of this page: http://grid.net.ru/nginx/upload.en.html
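    The upload module only parses multipart/form-data POSTs and normally hands the result to a backend via upload_pass, so it isn't a fit for raw PUT. For PUT with no backend at all, a sketch using the DAV module (already present in this build, per --with-http_dav_module; paths and sizes are illustrative):

        location /put-upload {
            root /html;
            dav_methods PUT;            # accept raw PUT bodies
            create_full_put_path on;    # create intermediate directories
            client_max_body_size 100m;
        }

    A quick test: curl -T file.bin http://example.com/put-upload/file.bin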

    Read the article

  • VSS Post Backup failures for Virtual Server 2005 R2 SP1 virtual machines

    - by califguy4christ
    We've been seeing strange errors with the Volume Shadow Copy service on our Virtual Server 2005 R2 SP1 host. It appears to be failing on a strange mount point in the C:\WINDOWS\Temp\ folders, which I believe is used by VSS to mount a writeable image file. To summarize:

        - The Microsoft Virtual Server 2005 Writer continually goes into a failed retryable state
        - The Virtual Server log reports errors during the Post Backup phase
        - VSS reports errors backing up a mount point of unknown origin
        - The mount point causes NTFS and ftdisk errors

    The host is x86 Windows Server 2003 Standard, SP2. The virtual machine is the same. Both use basic disks. Here is the writer state:

        Writer name: 'Microsoft Virtual Server 2005 Writer'
        Writer Id: {76afb926-87ad-4a20-a50f-cdc69412ddfc}
        Writer Instance Id: {78df98e2-bf19-4804-890b-15865efef3bd}
        State: [11] Failed
        Last error: Retryable error

    From the Virtual Server log (Vss Writer, Event ID 1035): The VSS writer for Virtual Server failed during the PostBackup phase. The guest shadow copies did not get exposed on the host machine, after mounting all the virtual hard disks of the virtual machine VMACHINE.

    From the Application log (VSS, Event ID 12290): Volume Shadow Copy Service warning: GetVolumeInformationW( \\?\Volume{fb84bae7-87f5-11dd-9832-001cc4961ca6}\,NULL,0, NULL,NULL,[0x00000000], , 260) == 0x0000045d. hr = 0x00000000.

    From the System log (Ntfs, Event ID 55): The file system structure on the disk is corrupt and unusable. Please run the chkdsk utility on the volume C:\WINDOWS\Temp\ {fb84bae7-87f5-11dd-9832-001cc49....

    My current theory is that VSS creates a mount point for an image file of the VHD, then the software panics for some reason, leaving everything in an inconsistent state. Removing the mount point doesn't resolve the problem. All of the other disks check out fine with CHKDSK. There's no exclusion option for VHDs or a way to turn off online backups. Has anyone seen this kind of thing before, or can you point me in the right direction for getting more information about the mount point and its origins? I haven't been able to trace what application is creating that mount point.

    Read the article

  • Sun-JRE on CentOS-4.8 RPM error: post-install scriptlet failed, exit status 5

    - by Emyr
    I have a server with CentOS 4.8 installed. The provider is rubbish, but there's only a few months left, and they're busy being sued by Chase bank, so I doubt I can get CentOS 5. I wiped the server clean using Virtuozzo, and found that the default image is VERY empty - I even had to install yum myself. I've reached the point where I want to install Tomcat. I downloaded the Sun JRE as a .rpm.bin file, did chmod a+x, and ran it. That produced a .rpm file, which I tried installing:

        [root@host java]# rpm -Uvh jre-6u20-linux-i586.rpm
        Preparing...                ########################################### [100%]
           1:jre                    ########################################### [100%]
        Unpacking JAR files...
                rt.jar...
                jsse.jar...
                charsets.jar...
                localedata.jar...
                plugin.jar...
                javaws.jar...
                deploy.jar...
        error: %post(jre-1.6.0_20-fcs.i586) scriptlet failed, exit status 5

        [root@host java]# rpm -evv jre-6u20-linux-i586.rpm
        D: opening db environment /var/lib/rpm/Packages joinenv
        D: opening db index /var/lib/rpm/Packages rdonly mode=0x0
        D: locked db index /var/lib/rpm/Packages
        D: opening db index /var/lib/rpm/Name rdonly mode=0x0
        error: package jre-6u20-linux-i586.rpm is not installed
        D: closed db index /var/lib/rpm/Name
        D: closed db index /var/lib/rpm/Packages
        D: closed db environment /var/lib/rpm/Packages

        [root@host java]# rpm -qi --scripts jre-6u20-linux-i586.rpm
        package jre-6u20-linux-i586.rpm is not installed

    I couldn't find any results on Google for any part of that error message, and I have very little experience with rpm (I usually use Debian). Is this a broken package, or am I missing something or some setting?
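    A side note on the two failing commands above, plus a way to see what the scriptlet actually runs (a sketch; the fix for exit status 5 depends on what the script does on this box): rpm -e and rpm -q take an installed package name, not a filename, and querying a .rpm file requires -p.

        rpm -qp --scripts jre-6u20-linux-i586.rpm     # show %post of the package file
        rpm -q --scripts jre                          # same, for the installed package
        rpm -e jre                                    # erase by package name
        rpm -Uvh --noscripts jre-6u20-linux-i586.rpm  # reinstall, skipping the scriptlets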

    Read the article

  • Building new computer, turns on, but no post

    - by addybojangles
    Pardon my ignorance here; I finally decided to put together a computer, and egads. I purchased a new motherboard, power supply, processor, video card and memory:

        - ASUS M4A79XTD EVO AM3 AMD 790X ATX motherboard
        - OCZ Fatal1ty OCZ550FTY 550W ATX12V v2.2 / EPS12V SLI-ready 80 PLUS certified modular active-PFC power supply
        - AMD Phenom II X4 965 Black Edition Deneb 3.4GHz quad-core processor (socket AM3, 125W, 4 x 512KB L2, 6MB L3)
        - XFX HD-577A-ZNFC Radeon HD 5770 (Juniper XT) 1GB 128-bit GDDR5 PCI Express 2.0 x16 video card
        - G.SKILL 4GB (2 x 2GB) 240-pin DDR3 1600 (PC3 12800) dual-channel memory kit, model F3-12800CL9D-4GBNQ

    (I originally had links for you guys, but I lack the rep, sorry!) I've got it all in the tower. I put in the power supply, installed the processor on the motherboard, installed the heatsink, put in the RAM, and I am using an older IDE hard disk. When I start the computer, the monitor tells me "check signal cable." As far as I can tell, the heatsink fan on the processor is spinning, the power supply is on (obviously), and the green LED on the motherboard is lit. I originally only had the bigger power connector plugged in to the motherboard (what I saw in a YouTube video as well as the motherboard instructions), but after doing some research that said to plug in the other ATX power connector, I did that too - and now trying to power the computer results in nothing. No beeps on startup, no POST. Does anyone have any ideas? Your ideas and help are greatly appreciated.

    Read the article

  • Server hang - data loss on reboot, post mortem analysis

    - by rovangju
    A development server I'm responsible for (ext3 on RAID 5, Debian Squeeze) froze up over the weekend and I was forced to reset it - unresponsive to KVM/physical keyboard access, no eth devices responding, etc. Not even the backup process ran (figures - the one time I don't check for confirmation). After the reset, it turns out that every trace of disk I/O activity that should have happened over a period of ~24h is completely gone. The log files have a big gap in their dates and times, as if the writes were never committed to disk and no processes had run. Luckily it was a weekend, nothing of value was lost, and I don't suspect a hack. What can I do in post mortem of this event to prevent it from ever happening again? I've seen this happen before on a completely different machine running FreeBSD. I am rounding up the disk-checking tools right now, but there must be more going on!

        Mount options: /dev/sda1 on / type ext3 (rw,errors=remount-ro)
        Kernel: Linux dev 2.6.32-5-686-bigmem
        Disk/Inodes: 13%/3%
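    One preventive sketch (assuming the symptom is a volatile write cache lost on the reset): make sure the journal is protected by write barriers, and consider disabling the drive cache if the controller isn't battery-backed. Option names below are for ext3 on this kernel; device names match the mount output above.

        # /etc/fstab - barrier=1 asks ext3 to flush the drive cache at journal commits
        /dev/sda1  /  ext3  defaults,errors=remount-ro,barrier=1  0  1

        # or disable the disk's volatile write cache entirely (at a performance cost)
        hdparm -W0 /dev/sda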

    Read the article

  • CentOS 6 - Make system aware of custom lib paths and missing base links

    - by Mike Purcell
    I am trying to compile libmemcached (1.0.7) on CentOS 6 and keep getting the following warning:

        ...
        checking for event.h... no
        configure: WARNING: Unable to find libevent
        ...

    I manually compiled libevent (2.0.19) and built it using the following configure line:

        OPTIONS="--prefix=/usr/local/_custom/app/libevent"

    Everything compiled and installed fine, but I couldn't figure out how to make the system aware that the lib files are in the custom /usr/local/_custom/app/libevent/lib dir. I stumbled upon an article which said I can make the system aware of custom lib paths by adding a file to the /etc/ld.so.conf.d/ directory:

        # /etc/ld.so.conf.d/customApp.conf
        /usr/local/_custom/app/libevent/lib

    Then I issued the ldconfig command and was able to confirm that libevent was included, by issuing:

        ldconfig -p | ack -i libevent

    Seeing that libevent was now included in the ldconfig output, I figured I would be able to compile libmemcached and satisfy the aforementioned warning. Unfortunately it did not work. So I took another look at the ldconfig output and noticed this:

        libevent_pthreads-2.0.so.5 (libc6,x86-64) => /usr/local/_custom/app/libevent/lib/libevent_pthreads-2.0.so.5
        libevent_openssl-2.0.so.5 (libc6,x86-64) => /usr/local/_custom/app/libevent/lib/libevent_openssl-2.0.so.5
        libevent_extra-2.0.so.5 (libc6,x86-64) => /usr/local/_custom/app/libevent/lib/libevent_extra-2.0.so.5
        libevent_core-2.0.so.5 (libc6,x86-64) => /usr/local/_custom/app/libevent/lib/libevent_core-2.0.so.5
        libevent-2.0.so.5 (libc6,x86-64) => /usr/local/_custom/app/libevent/lib/libevent-2.0.so.5

    There are no references to the base links; for example, I would expect to see links like these (ls -la /usr/local/_custom/app/libevent/lib):

        libevent.so -> libevent-2.0.so.5.1.7
        libevent_openssl.so -> libevent_openssl-2.0.so.5.1.7
        libevent_core.so -> libevent_core-2.0.so.5.1.7

    So either I am doing something wrong, or the system still does not know where to find libevent.so.

    -- Update #1 --

    I wasn't able to get libmemcached to compile without the warning, even after trying to compile with the following configure command:

        ./configure --prefix=/usr/local/_custom/app/libmemcached \
            CFLAGS="-I/usr/local/_custom/app/libevent/include" \
            LDFLAGS="-L/usr/local/_custom/app/libevent/lib"

    I thought for sure this would work because I am directly passing the include and lib directories to the configure command. But it did not.
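    Two things worth checking (a sketch, not a verified fix): the "checking for event.h" test runs through the preprocessor, which reads CPPFLAGS rather than CFLAGS in many autoconf setups; and ldconfig only manages the .so.N runtime links - the bare libevent.so linker names are created by libevent's own make install, so their absence suggests the install step was cut short.

        # re-run the libmemcached configure with the header path in CPPFLAGS
        ./configure --prefix=/usr/local/_custom/app/libmemcached \
            CPPFLAGS="-I/usr/local/_custom/app/libevent/include" \
            LDFLAGS="-L/usr/local/_custom/app/libevent/lib"

        # if libevent.so is still missing, recreate the linker-name link by hand
        ln -s libevent-2.0.so.5.1.7 /usr/local/_custom/app/libevent/lib/libevent.so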

    Read the article

  • Sun-JRE on CentOS-4.8 RPM error: post-install scriptlet failed, exit status 5

    - by Emyr
    I have a server with CentOS 4.8 installed. The provider is rubbish, but there's only a few months left, and they're busy being sued by Chase bank, so I doubt I can get CentOS 5. I wiped the server clean using Virtuozzo, and found that the default image is VERY empty - I even had to install yum myself. I've reached the point where I want to install Tomcat. I downloaded the Sun JRE as a .rpm.bin file, did chmod a+x, and ran it. That produced a .rpm file, which I tried installing:

        [root@host java]# rpm -Uvh jre-6u20-linux-i586.rpm
        Preparing...                ########################################### [100%]
           1:jre                    ########################################### [100%]
        Unpacking JAR files...
                rt.jar...
                jsse.jar...
                charsets.jar...
                localedata.jar...
                plugin.jar...
                javaws.jar...
                deploy.jar...
        error: %post(jre-1.6.0_20-fcs.i586) scriptlet failed, exit status 5

        [root@host java]# rpm -qi jre
        Name        : jre                       Relocations: /usr/java
        Version     : 1.6.0_20                  Vendor: Sun Microsystems, Inc.
        Release     : fcs                       Build Date: Mon Apr 12 19:34:13 2010
        Install Date: Thu May 6 06:36:17 2010   Build Host: jdk-lin-1586
        Group       : Development/Tools         Source RPM: jre-1.6.0_20-fcs.src.rpm
        Size        : 50708634                  License: Sun Microsystems Binary Code License (BCL)
        Signature   : (none)
        Packager    : Java Software <[email protected]>
        URL         : http://java.sun.com/
        Summary     : Java(TM) Platform Standard Edition Runtime Environment
        Description : The Java Platform Standard Edition Runtime Environment (JRE)
        contains everything necessary to run applets and applications designed for
        the Java platform. This includes the Java virtual machine, plus the Java
        platform classes and supporting files. The JRE is freely redistributable,
        per the terms of the included license.

    I couldn't find any results on Google for any part of that error message, and I have very little experience with rpm (I usually use Debian). Is this a broken package, or am I missing something or some setting?

    Read the article

  • LSI MegaRAID LINUX got Optimal after degradation but strange POST message

    - by kesrut
    A Linux server box with an LSI MegaRAID controller had its array degrade, but after some time the RAID status changed back to Optimal:

        Adapter 0 -- Virtual Drive Information:
        Virtual Drive: 0 (Target Id: 0)
        Name                :
        RAID Level          : Primary-1, Secondary-0, RAID Level Qualifier-0
        Size                : 2.727 TB
        Mirror Data         : 2.727 TB
        State               : Optimal
        Strip Size          : 256 KB
        Number Of Drives per span: 2
        Span Depth          : 3
        Default Cache Policy: WriteBack, ReadAdaptive, Cached, No Write Cache if Bad BBU
        Current Cache Policy: WriteThrough, ReadAdaptive, Cached, No Write Cache if Bad BBU
        Default Access Policy: Read/Write
        Current Access Policy: Read/Write
        Disk Cache Policy   : Disk's Default
        Encryption Type     : None
        Is VD Cached        : No

    But now I'm getting this RAID BIOS POST message (screenshot: http://cl.ly/image/1h1O093b1i2d):

        Your battery is either charging, bad or missing, and you have VDs configured
        for write-back mode. Because the battery is not currently usable, these VDs
        will actually run in write-through mode until the battery is fully charged
        or replaced if it is bad or missing.

    So could a battery problem have caused this? Here is the battery information:

        BatteryType: iBBU
        Voltage: 4001 mV
        Current: 0 mA
        Temperature: 22 C
        Battery State : Operational
        BBU Firmware Status:
          Charging Status              : None
          Voltage                      : OK
          Temperature                  : OK
          Learn Cycle Requested        : No
          Learn Cycle Active           : No
          Learn Cycle Status           : OK
          Learn Cycle Timeout          : No
          I2c Errors Detected          : No
          Battery Pack Missing         : No
          Battery Replacement required : No
          Remaining Capacity Low       : No
          Periodic Learn Required      : No
          Transparent Learn            : No
          No space to cache offload    : No
          Pack is about to fail & should be replaced : No
          Cache Offload premium feature required     : No
          Module microcode update required           : No

    Where could the problem be? I've disabled the alarms, but I get them when they're enabled, and I don't know how to find the root of the problem.
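    A few MegaCli commands that may help narrow this down (a sketch - the binary name and path vary by install, e.g. MegaCli64 under /opt/MegaRAID/MegaCli):

        MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL            # full BBU state, charge, next learn
        MegaCli64 -AdpBbuCmd -BbuLearn -aALL                # start a manual learn cycle
        MegaCli64 -AdpEventLog -GetEvents -f ev.log -aALL   # controller event log around the degrade

    If the BBU periodically drops the array to write-through during learn cycles, that alone reproduces the POST warning without the battery actually being bad.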

    Read the article

  • Excel 2007 Constantly Creating Custom Cell Styles

    - by Nick
    Hello, I've been using Office 2007 for a few months now and have noticed that Excel has been creating duplicate custom cell styles on its own, like Normal 2, Normal 3, etc. It didn't really bother me at first, but now Excel lags when I open the cell styles menu, as it gathers well over a hundred of these duplicates (I have seen Normal 54 and Normal 5 2 2, so I'm unsure how many there actually are). I have also just checked: a fresh Excel sheet has only the defaults, but one I created from scratch earlier today lists Normal 54. My questions are: Why is this happening? Can I delete a temp or custom settings file somewhere to clear this? Any help on this is appreciated.
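    Styles travel with workbooks (copying sheets between files is a common way they multiply), so clearing them is per-workbook rather than a settings file. A VBA sketch that strips everything non-built-in from the open workbook:

        Sub RemoveCustomStyles()
            Dim s As Style
            For Each s In ActiveWorkbook.Styles
                ' Built-in styles (Normal, Heading 1, ...) are left untouched
                If Not s.BuiltIn Then s.Delete
            Next s
        End Sub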

    Read the article

  • Specifying prerequisites for Puppet custom facts?

    - by larsks
    I have written a custom Puppet fact that requires the biosdevname tool to be installed. I'm not sure how to set things up so that this tool is installed before Facter tries to resolve the custom fact. Facts are loaded early in the process, so I can't simply put

        package { biosdevname: ensure => installed }

    in the manifest, since by the time Puppet gets this far the custom fact has already failed. I was curious whether I could resolve this through Puppet's run stages. I tried:

        stage { pre: before => Stage[main] }
        class { biosdevname: stage => pre }

    with:

        class biosdevname {
          package { biosdevname: ensure => installed }
        }

    But this doesn't work - Puppet loads facts before entering the pre stage:

        info: Loading facts in physical_network_config
        ./physical_network_config.rb:33: command not found: biosdevname -i eth0
        info: Applying configuration version '1320248045'
        notice: /Stage[pre]/Biosdevname/Package[biosdevname]/ensure: created

    Etc. Is there any way to make this work?

    EDIT: I should make it clear that, given a suitable package declaration, the fact will run correctly on subsequent runs. The difficulty is that this is part of our initial configuration process: we're running Puppet out of kickstart and want the network configuration in place before the first reboot. It sounds like the only workable solution is to run Puppet twice during the initial system configuration, which will ensure the necessary packages are in place. Also, for Zoredache, here is the fact:

        # This produces a fact called physical_network_config that describes
        # the number of NICs available on the motherboard, on PCI bus 1, and on
        # PCI bus 2. The fact value is of the form <x>-<y>-<z>, where <x>
        # is the number of embedded interfaces, <y> is the number of interfaces
        # on PCI bus 1, and <z> is the number of interfaces on PCI bus 2.
        em = 0
        pci1 = 0
        pci2 = 0

        Dir['/sys/class/net/*'].each { |file|
          devname = File.basename(file)
          biosname = %x[biosdevname -i #{devname}]
          case
          when biosname.match('^pci1')
            pci1 += 1
          when biosname.match('^pci2')
            pci2 += 1
          when biosname.match('^em[0-9]')
            em += 1
          end
        }

        Facter.add(:physical_network_config) do
          setcode do
            "#{em}-#{pci1}-#{pci2}"
          end
        end
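    One way to at least stop the first run from blowing up (a sketch; the fact simply stays undefined until biosdevname exists, then resolves normally on the second run). The /sbin path is an assumption - adjust to wherever the package installs the tool.

        Facter.add(:physical_network_config) do
          setcode do
            if File.executable?('/sbin/biosdevname')
              em = pci1 = pci2 = 0
              Dir['/sys/class/net/*'].each do |file|
                biosname = %x[/sbin/biosdevname -i #{File.basename(file)}]
                case
                when biosname.match('^pci1')    then pci1 += 1
                when biosname.match('^pci2')    then pci2 += 1
                when biosname.match('^em[0-9]') then em += 1
                end
              end
              "#{em}-#{pci1}-#{pci2}"
            end
          end
        end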

    Read the article

  • custom adm file for IE search providers on Windows 2003

    - by filipv
    Hi, I have been stumped by this issue for some time. I created a custom ADM template for a customer to populate the search providers in Internet Explorer 7 and 8. The custom ADM works fine, and I have set 3 search providers. The problem is that I cannot change them (the customer wants to change the entry for Wikipedia from EN to NL): I edited the ADM file, but the clients seem to keep using the old settings. Removing or replacing the ADM file has no effect either; the settings remain. I used the following article as a base: http://support.microsoft.com/kb/918238. For IE8 you need to work with a name instead of a GUID for the default entry; that works as expected. But there seems to be no way to change the setting once it is in place.
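    A likely explanation (hedged - it depends on where the template writes): ADM settings outside the approved Policies keys are "preferences" and tattoo the registry, so editing or removing the ADM never rewrites values that were already stamped. The IE search providers live under the per-user key below; updating the values there (by script, or by a revised template that overwrites them) is one way out. The GUID and URL are illustrative:

        [HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\SearchScopes\{00000000-0000-0000-0000-000000000001}]
        "DisplayName"="Wikipedia (NL)"
        "URL"="http://nl.wikipedia.org/wiki/Special:Search?search={searchTerms}"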

    Read the article

  • Require a very simple bash-based webserver for logging XML POST [on hold]

    - by Syffys
    As in the title, it's for testing purposes and I need it to be extremely light (one line, or one single light file). Here is a sample XML query:

        XML_QUERY=$(cat <<EOF
        <?xml version='1.0' encoding='UTF-8'?>
        <Test></Test>
        EOF
        )
        curl -H "Content-type: text/xml; charset=utf-8" -H "Soapaction: \"\"" \
             -k -d "${XML_QUERY}" http://localhost:8088

    Here are some of the tracks I have found so far, even if I wasn't able to adapt them to work as I expect:

        - Netcat minimal webserver: the problem is that my nc does not have the -q option, so the connection closes before delivering the XML content
        - Netcat-only webserver: same as above

    Thanks in advance! EDIT: As has been asked, I'm running Linux Red Hat, even though the distro does not really matter and the OS is implied since I'm asking for a bash-based solution. Also, since my topic was put on hold ("Instead, describe your situation and the specific problem you're trying to solve" - I thought that was exactly what I was doing, but OK, I'll reword): my situation is a bash environment (which can also include standard Linux tools: netcat, python or whatever); my specific problem is in the title - I require a very simple bash-based webserver for logging XML sent in HTTP POST bodies, for testing purposes.
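    A minimal sketch along those lines (flag spelling varies by nc flavor: traditional netcat wants "nc -l -p 8088", the OpenBSD variant "nc -l 8088"); it answers with an empty 200 so curl doesn't hang, and appends each request - headers and XML body - to a log file:

        while true; do
          printf 'HTTP/1.1 200 OK\r\nContent-Length: 0\r\nConnection: close\r\n\r\n' \
            | nc -l -p 8088 >> /tmp/xml_post.log
        done

    One request is served per loop iteration, which is usually fine for test traffic.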

    Read the article

  • Using Apache Environment Variables to set custom ErrorDocument

    - by Tad
    I've got a set of RewriteCond rules that test for various mobile devices and then set environment variables like "env=device:.iphone" or "env=device:.smartphone" if the user agent matches an iPhone or Android device. I'm now trying to redirect the user to custom-styled 404/500 server error pages for each device, by way of the error pages. Ideally I'd like to be able to test for a variable being there and then write in a custom ErrorDocument string, but an Apache if-test doesn't seem to work in this case. Any ideas how I can construct if/else tests in an Apache conf file for environment variables?
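    One pattern that avoids conditional ErrorDocument directives entirely (a sketch; the "device" variable name comes from the rewrite rules described above, and the file names are illustrative): point each status at a single script and branch there. On the internal redirect to the error document, Apache re-exports previously set environment variables with a REDIRECT_ prefix.

        # httpd.conf
        ErrorDocument 404 /errors/404.php
        ErrorDocument 500 /errors/500.php

        <?php
        // errors/404.php - pick the device-specific page
        $device = isset($_SERVER['REDIRECT_device']) ? $_SERVER['REDIRECT_device'] : 'desktop';
        include dirname(__FILE__) . "/404-" . basename($device) . ".html";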

    Read the article

  • Google Chrome custom search engine for secure Wikipedia

    - by gdejohn
    I have this custom search engine set up in Google Chrome: https://encrypted.google.com/search?q=site%3Aen.wikipedia.org+%s&btnI=745 It searches Google for site:en.wikipedia.org {query}, and the btnI=745 is for I'm Feeling Lucky, so it automatically redirects to the first result. I like this better than using Wikipedia's search function directly because it gives me very effective approximate string matching, so I can misspell my search, or leave a word out, or just search for some keywords, and I still get what I'm looking for right away. What I'd like is for it to use Wikipedia's secure gateway: https://secure.wikimedia.org/wikipedia/en/wiki/ It's easy enough to set up a custom search engine that uses the secure version of Wikipedia's search function directly, but I can't figure out how to correctly incorporate it into my version going through Google. Nothing I've tried works.

    Read the article

< Previous Page | 95 96 97 98 99 100 101 102 103 104 105 106  | Next Page >