Search Results

Search found 4671 results on 187 pages for 'universal binary'.

Page 112/187 | < Previous Page | 108 109 110 111 112 113 114 115 116 117 118 119  | Next Page >

  • Single file changed: intrusion or corruption?

    - by Michaël Witrant
    rkhunter reported a single file change on a virtual server (the netstat binary). It didn't report any other warning. The change was not the result of a package upgrade (I reinstalled it and the checksum is back as it was before). I'm wondering whether this is file corruption or an intrusion. I guess an intrusion would have changed many other files watched by rkhunter (or none, if the intruder had access to rkhunter's database). I disassembled both binaries with objdump -d and stored the diff here: https://gist.github.com/3972886 The full dump diff generated with objdump -s is here: https://gist.github.com/3972937 I guess file corruption would have changed either large blocks or single bits, not small blocks like this. Do these changes look suspicious? How could I investigate further? The system is running Debian Squeeze.
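
    A quick cross-check against the package database (just a sketch, assuming the netstat binary comes from the stock Debian net-tools package and that debsums is installed):

        # compare the installed binary with the md5sums shipped by the package
        debsums -c net-tools
        md5sum /bin/netstat
        grep netstat /var/lib/dpkg/info/net-tools.md5sums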

    Read the article

  • PHP get "Application Root" in Xampp on Windows correctly

    - by Michael Mao
    Hi all: I found this thread on StackOverflow about how to get the "Application Root" from inside my web app. However, most of the approaches suggested in that thread can hardly be applied to my XAMPP install on Windows. Say I've got a common.php which lives inside my web app's app directory:

        /
        /app/common.php
        /protected/index.php

    In my common.php, what I've got is like this:

        define('DOCROOT', $_SERVER['DOCUMENT_ROOT']);
        define('ABSPATH', dirname(__FILE__));
        define('COMMONSCRIPT', $_SERVER['PHP_SELF']);

    After I required common.php inside /protected/index.php, I found this:

        C:/xampp/htdocs              // <== echo DOCROOT;
        C:\xampp\htdocs\comic\app    // <== echo ABSPATH
        /comic/protected/index.php   // <== echo COMMONSCRIPT

    So the most troublesome part is that the path delimiters are not universal; plus, it seems all superglobals from the $_SERVER[] associative array, such as $_SERVER['PHP_SELF'], are relative to the "caller" script, not the "callee" script. It seems that I can only rely on dirname(__FILE__) to make sure this always returns an absolute path to the common.php file. I can certainly parse the return values of DOCROOT and ABSPATH and then calculate the correct "application root". For instance, I can compare the parts after htdocs and substitute all backslashes with slashes to get a Unix-like path. I wonder, is this the right way to handle this? What if I deploy my web app in a LAMP environment? Would this environment-dependent approach bomb out on me? I have used some PHP frameworks such as CakePHP and CodeIgniter; to be frank, they just work on either LAMP or WAMP, but I am not sure how they arrived at such an elegant solution. Many thanks in advance for all the hints and suggestions.
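
    One portable sketch, assuming common.php always stays one level below the application root (the APPROOT/APPURL names here are made up for illustration):

        <?php
        // filesystem path of the app root, with Windows backslashes normalised to slashes
        $docroot = str_replace('\\', '/', realpath($_SERVER['DOCUMENT_ROOT']));
        define('APPROOT', str_replace('\\', '/', dirname(dirname(__FILE__))));  // e.g. C:/xampp/htdocs/comic
        define('APPURL', substr(APPROOT, strlen($docroot)));                    // e.g. /comic

    The same lines behave identically under LAMP, since there are simply no backslashes to replace.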

    Read the article

  • How to protect copyright on large web applications?

    - by Saif Bechan
    Recently I read about Myows, described as "the universal copyright management and protection app for smart creatives". It is used to protect copyright and more. Currently I am working on a large web application which is in a late testing phase. Because of the complexity of the app there are not many versions of it online, so copyright will be a huge issue for me, as much of the code is in JavaScript and is easily copied. I was glad to see that there is a company out there that provides such services, and naturally I wanted to know if there were people using it. I did not know that this type of concept was so new. Is protecting copyright a good idea for a large web application? If so, do you think Myows will be worth using, or are there better ways to achieve that? Update: Wow, there is no better person to have answered this question than the creator himself. There were some nice points made in the answer, and I think it will be a good service for people like me. In the next couple of weeks I will be looking further into the subject and start uploading my code and see how it works out. I will leave this question up because I do want some more suggestions on this topic.

    Read the article

  • Add new types to Go

    - by nevalu
    I'm trying to add new types that can be managed/used like the Go core types. Creating new types is very useful for validating data before sending it to a non-SQL DBMS, or for checking data from a form. Go uses universal constants to define them at the global level:

        var DateType = universe.DefineType("date", universePos, &dateType{})

    In this case they're defined to be called from a package like types:

        var Date = &dateType{}

    I get these errors:

        test.go:58: o.lit undefined (cannot refer to unexported field lit)
        test.go:62: *dateType is not Type
                missing Pos() token.Position

    The code is based on:

        http://github.com/tav/go/blob/master/src/pkg/exp/eval/value.go
        http://github.com/tav/go/blob/master/src/pkg/exp/eval/type.go

        package main

        import (
            "exp/eval"
            "fmt"
            // "go/token"
        )

        // http://github.com/tav/go/blob/master/src/pkg/exp/eval/value.go

        type DateValue interface {
            eval.Value
            Get(*eval.Thread) string
            Set(*eval.Thread, string)
        }

        /* Date */

        type dateV string

        func (v *dateV) String() string { return fmt.Sprint(*v) }

        func (v *dateV) Assign(t *eval.Thread, o eval.Value) {
            *v = dateV(o.(DateValue).Get(t))
        }

        func (v *dateV) Get(*eval.Thread) string { return string(*v) }

        func (v *dateV) Set(t *eval.Thread, x string) { *v = dateV(x) }

        // http://github.com/tav/go/blob/master/src/pkg/exp/eval/type.go

        type Type interface {
            eval.Type
            // isDate returns true if this is a date type.
            isDate() bool
        }

        /* Common type */

        type commonType struct{} // added

        func (commonType) isDate() bool { return false }

        /* Date */

        type dateType struct {
            commonType
        }

        // * It should not be an universal constant
        //var universePos = token.Position{"<universe>", 0, 0, 0} // added
        //var DateType = universe.DefineType("date", universePos, &dateType{})
        var Date = &dateType{}

        func (t *dateType) compat(o Type, conv bool) bool {
            t2, ok := o.lit().(*dateType)
            return ok && t == t2
        }

        func (t *dateType) lit() Type { return t }

        func (t *dateType) isDate() bool { return true }

        func (t *dateType) String() string { return "<date>" }

        func (t *dateType) Zero() eval.Value {
            res := dateV("")
            return &res
        }

        /* Named types */

        /* type NamedType struct {
            eval.NamedType
            Def Type
        }*/

        type NamedType struct { // added
            // token.Position
            Name string
            // Underlying type. If incomplete is true, this will be nil.
            // If incomplete is false and this is still nil, then this is
            // a placeholder type representing an error.
            Def Type
            // True while this type is being defined.
            incomplete bool
            methods    map[string]eval.Method
        }

        func (t *NamedType) isDate() bool { return t.Def.isDate() }

        /* *********************** */

        func main() {
            print("foo")
        }

    Read the article

  • Add dpkg .symbols or .shlibs to a package made using checkinstall

    - by Hubert Kario
    I have created a simple package of the Oracle Instant Client libraries using checkinstall. The package installs without problems and is seen by the system. The problem is that checkinstall doesn't create the /var/lib/dpkg/info/oracle-instantclient11.2-basic.symbols or /var/lib/dpkg/info/oracle-instantclient11.2-basic.shlibs files, so when I try to build another package (with proper build scripts) that depends on oracle-instantclient11.2-basic, the build fails with:

        dpkg-shlibdeps: error: no dependency information found for \
            /usr/lib/libclntsh.so.11.1 (used by \
            debian/libopendbx1-oracle/usr/lib/opendbx/liboraclebackend.so.1.2.0).
        dh_shlibdeps: dpkg-shlibdeps \
            -Tdebian/libopendbx1-oracle.substvars \
            debian/libopendbx1-oracle/usr/lib/opendbx/liboraclebackend.so.1.2.0 \
            returned exit code 2
        make: *** [binary-arch] Error 9

    Is there an easy way to automatically create a package with .symbols or .shlibs files?
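
    If nothing generates them automatically, a shlibs entry is only one line per library; a sketch for this soname (the file would be shipped as DEBIAN/shlibs inside the package, ending up as /var/lib/dpkg/info/oracle-instantclient11.2-basic.shlibs once installed):

        libclntsh 11.1 oracle-instantclient11.2-basic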

    Read the article

  • MySQL-python 1.2.3 and OS X 10.5: 64- or 32-bit?

    - by Dave Everitt
    I've been happily using Django and MySQL in development on an existing machine running OS X 10.4 Tiger, and have set up a similar environment on 10.5 Leopard on a new 64-bit MacBook, with a working MySQL and Python 2.6.4. However, now that I want them to communicate, easy_install MySQL-python gave ld warnings that the file is not of the required architecture, which led me to test my Python 2.4.6 install (from the Mac OS X disc image):

        >>> import sys
        >>> sys.maxint
        2147483647

    Ah. So my Python install appears to be 32-bit and (I think?) won't build MySQL-python against my 64-bit MySQL. There are lots of hacks out there for MySQL-python on OS X (mostly 1.2.2), but after hours of reading I'm pretty sure they won't fix this architecture mismatch. So I'm stuck because I can't decide whether to:

        - give up, remove the 64-bit MySQL install (thorough methods, please?) and use the 32-bit MySQL disc image instead;
        - re-install Python in 64-bit mode from the tarball, with --with-universal-archs=64-bit and --enable-universalsdk= as detailed in Python.org's 2.6 news.

    So my questions for anyone who has encountered this issue are:

        - Is installing 64-bit Python on OS X 10.5 worth bothering with?
        - If so, (naive, lazy question!) how are the two required arguments combined?
        - If I just skip along in 32-bit (as on my working setup), what am I missing?

    I'm after a hassle-free install that's easy to reproduce on other machines (possible student use), so I'd really welcome your opinions, please!
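
    A quick way to see what the running interpreter was built as (a sketch; these modules ship with the stock Python 2.x installs):

        >>> import struct, platform, sys
        >>> struct.calcsize("P") * 8   # pointer size in bits: 32 or 64
        >>> platform.architecture()
        >>> sys.maxint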

    Read the article

  • Apache RewriteRule with a RewriteMap variable substitution for the VAL argument to environment variable

    - by Eric
    I have an Apache server that serves up binary files to an application (not a browser). The application making the request wants the HTTP Content-MD5 header in hex format. The default and only option within Apache is Base64: if I add "ContentDigest on" to my VirtualHost, I get this header in Base64. So I wrote a Perl script, md5digesthex.pl, that gives me exactly what I want, MD5 in hex format, but I'm struggling with the RewriteRule to get my server to send the result. Here is my current rewrite recipe:

        RewriteEngine on
        RewriteMap md5inhex prg:/www/download/md5digesthex.pl
        RewriteCond %{REQUEST_URI} ^/download/(.*)
        RewriteRule ^(.*) %{REQUEST_URI} [E=HASH:${md5inhex:$1}]
        Header set Content-MD5 "%{HASH}e" env=HASH

    The problem is that I can't seem to set the HASH environment variable based on the output of the md5inhex map function. It appears this behavior is not supported and I'm at a loss as to how to formulate this...
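
    For reference, a prg: map is just a long-running filter on stdin/stdout; a rough sketch of what md5digesthex.pl could look like (the /www/download prefix is a guess at where the files live on disk, adjust to however the map key maps to a filesystem path):

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Digest::MD5;

        $| = 1;                          # prg: maps must run unbuffered
        while (my $path = <STDIN>) {
            chomp $path;
            my $md5 = Digest::MD5->new;
            if (open my $fh, '<', "/www/download/$path") {
                binmode $fh;
                $md5->addfile($fh);
                print $md5->hexdigest, "\n";
                close $fh;
            } else {
                print "NULL\n";          # mod_rewrite convention for "no value found"
            }
        }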

    Read the article

  • How to determine the data type of a CvMat

    - by Chris
    When using the CvMat type, the type of data is crucial to keeping your program running. For example, depending on whether your data is type float or unsigned char, you would choose one of these two commands:

        cvmGet(mat, row, col);
        cvGetReal2D(mat, row, col);

    Is there a universal approach to this? If the wrong data type matrix is passed to these calls, they crash at runtime. This is becoming an issue, since a function I have defined is getting passed several different types of matrices. How do you determine the data type of a matrix so you can always access its data? I tried using the "type()" function as such:

        CvMat* tmp_ptr = cvCreateMat(t_height,t_width,CV_8U);
        std::cout << "type = " << tmp_ptr->type() << std::endl;

    This does not compile, saying "term does not evaluate to a function taking 0 arguments". If I remove the brackets after the word type, I get a type of 1111638032.

    EDIT: minimal application that reproduces this...

        int main( int argc, char** argv )
        {
            CvMat *tmp2 = cvCreateMat(10,10, CV_32FC1);
            std::cout << "tmp2 type = " << tmp2->type << " and CV_32FC1 = " << CV_32FC1
                      << " and " << (tmp2->type == CV_32FC1) << std::endl;
        }

    Output:

        tmp2 type = 1111638021 and CV_32FC1 = 5 and 0
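
    The raw type field packs the element type together with internal flags, which is why the number looks odd; the CV_MAT_* macros in the old C API unpack it. A small sketch (header path may differ between OpenCV versions):

        #include <opencv/cxcore.h>
        #include <cstdio>

        int main()
        {
            CvMat *m = cvCreateMat(10, 10, CV_32FC1);
            int type     = CV_MAT_TYPE(m->type);   // strips the flag bits -> 5, i.e. CV_32FC1
            int depth    = CV_MAT_DEPTH(m->type);  // CV_32F
            int channels = CV_MAT_CN(m->type);     // 1
            std::printf("type=%d depth=%d channels=%d matches=%d\n",
                        type, depth, channels, type == CV_32FC1);
            cvReleaseMat(&m);
            return 0;
        }

    With the flags stripped, the value can be compared against CV_8UC1, CV_32FC1, and friends to pick the right accessor.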

    Read the article

  • Is there a way to set access to WMI using GroupPolicy?

    - by Greg Domjan
    From various documentation it appears that to change WMI access you need to use WMI to access the running service and modify specific parts of the tree. It's kind of annoying to change 150,000 hosts using the UI, and then to have to include such changes in the process of adding new hosts. I could write a script to do the same, but that needs to either connect to all those machines live, or be distributed for later update, say in a startup/install script. And then you have to mess around with copying binary SD data from an example access control. I've also found you can change the wbem/*.mof file to include an SDDL, but I'm really vague on how that all works at the moment. Am I just missing some simple point of administration?

    Read the article

  • How can I disable reverse DNS in Apache 2?

    - by Creighton Hale
    I want to disable reverse DNS in Apache 2. I have done the following steps:

        - In the apache2/apache2.conf file, HostnameLookups is set to Off.
        - A tcpdump session confirmed that Apache was doing double reverse lookups even though the HostnameLookups directive was clearly turned off.
        - There are no hostnames in sites-available.

    The problem still remains.

    UPD: the version of Apache is

        dpkg -l | grep apache2
        ii  apache2-mpm-prefork  2.2.16-6+squeeze4  Apache HTTP Server - traditional non-threaded model
        ii  apache2-utils        2.2.16-6+squeeze4  utility programs for webservers
        ii  apache2.2-bin        2.2.16-6+squeeze4  Apache HTTP Server common binary files
        ii  apache2.2-common     2.2.16-6+squeeze4  Apache HTTP Server common files

        apache2 -l
        Compiled in modules:
          core.c
          mod_log_config.c
          mod_logio.c
          prefork.c
          http_core.c
          mod_so.c

    I think mod_security is not present.

    Read the article

  • chroot'ing SSH home directories, shell problem.

    - by Hamza
    Hi folks, I am trying to chroot my SSH users to their home directories and it seems to work... in a strange way. Here is what I have in my sshd_config:

        Match group restricthome
            ChrootDirectory %h

    The permissions on the user directories look like this:

        drwxr-xr-x 2 root root 1024 May 11 13:45 [user]/

    And I can see that the user logs in successfully:

        May 11 13:49:23 box sshd[5695]: Accepted password for [user] from x.x.x.x port 2358 ssh2

    (with no error messages after this). But after entering the password the PuTTY window closes. This is a wild guess, but could it be because the user's shell is set to /bin/bash and it can't execute because of the chroot? If so, could you give me pointers on how to fix it? Would simply copying the bash binary into the user's home directory and modifying the shell work? How would I deal with the dependencies? ldd shows quite a few of those :) Comments/suggestions will be appreciated. Thanks.
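
    As a rough sketch of the "copy bash plus its libraries" idea (assuming the chroot is the user's home directory), ldd can drive the copy:

        # run as root; the chroot path here is only an example
        CHROOT=/home/user
        mkdir -p $CHROOT/bin
        cp /bin/bash $CHROOT/bin/
        for lib in $(ldd /bin/bash | grep -o '/[^ ]*'); do
            mkdir -p $CHROOT$(dirname $lib)
            cp "$lib" $CHROOT$lib
        done

    Note that the dynamic loader (/lib/ld-linux*.so.* or /lib64/ld-linux-x86-64.so.2) appears in ldd's output, so the loop picks it up too.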

    Read the article

  • I need a few minutes of dedicated server a week, but not for hosting, just to convert ogg etc.

    - by talkingnews
    I'm completely happy with my webhosting; it's just that I need to do one little thing they won't allow, and that's run an instance of SoX to convert about 30 mp3s to ogg files, in various directories, a couple of times a week, to be done automatically in response to the detection of the upload of an mp3. I'm probably looking at a minute of server time over the whole week. I've had unhelpful suggestions on other forums like "why not leave your home PC on 24 hours a day and then use all your ISP bandwidth to do this", which doesn't work for me. I know that I can host files on, say, Amazon S3, but is there something similar for my needs? All it would need to do would be: wget/ftp the mp3 files, convert them to ogg, ftp the files back to my hosting. Of course, all this wouldn't be needed if there were such a thing as a compiled binary of SoX (or any mp3-to-ogg converter) for CentOS which I could upload without needing root access, but I've given up asking that one. Always open to suggestions!
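
    For what it's worth, the conversion step itself is a one-liner per file once a sox binary with MP3 and Vorbis support is available (the paths here are only an example):

        for f in */*.mp3; do sox "$f" "${f%.mp3}.ogg"; done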

    Read the article

  • FTP Reporting Disk Quota Exceeded

    - by Austin
    I am using Notepad++ with FTP_Synchronize to upload files to a server; however, it appears that my file is not allowed to upload because of "Disk quota exceeded":

        11:18:49 > -> TYPE I
        11:18:49 > Response (200): Type set to I
        11:18:49 > -> PASV
        11:18:49 > Response (227): Entering Passive Mode (*,*,*,*,*,*).
        11:18:50 > -> STOR /home/*/../../var/www/html/test.html
        11:18:50 > Response (150): Opening BINARY mode data connection for /home/*/../../var/www/html/test.html
        11:18:50 > Response (552): Transfer aborted. Disk quota exceeded

    Now it may appear that my disk quota really is exceeded, but I've gone to the back-end and saw:

        Total Used Bandwidth    107.055 MB
        Allowed Quota           3,000.0 MB

    Note: stars were put in place of irrelevant data.

    Read the article

  • sar command to generate only CPU utilization and network statistics in Linux

    - by Vijay Shankar Kalyanaraman
    I want to be able to generate the system activity report every 30 seconds or every minute, store it in a file, and use it for diagnostic purposes on my VM. So I give an output file to the sar command and read it back using the "-f" option. But I only use the CPU utilization and network utilization parts of the report, so the rest is all data that I don't want to save (it wastes disk space to store those reports). Also, the sar files that are generated are all binary. Is there a way to collect these stats for CPU and network utilization alone, and so save almost two thirds of the space on disk?
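
    A sketch of the collect-then-filter approach (file name and interval are only examples): record sar's samples into one binary file, then render only the CPU and network sections from it:

        sar -o /var/log/sar-diag.bin 30 > /dev/null 2>&1 &    # sample every 30 s into a binary file
        sar -u -f /var/log/sar-diag.bin                       # CPU utilization only
        sar -n DEV -f /var/log/sar-diag.bin                   # per-interface network statistics

    The binary file still holds the other activities; trimming it further would mean post-processing, e.g. keeping only the text of the two reports above.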

    Read the article

  • Cocoa app with Python extension which uses SciPy -> ImportError: No module named scipy

    - by Snej
    Hi: I have installed SciPy (via MacPorts) for Python on my Mac and it runs fine from Python scripts. But now I'm using SciPy (via PyObjC) for calculations embedded in a Cocoa app frontend, and the following error occurs:

        ImportError: No module named scipy

    I am using the "Python.framework" in Xcode. Does anybody know why the scipy module is not found? I even added it manually to the module search path via

        sys.path.append("/opt/local/var/macports/software/py26-scipy/0.7.1_0+gcc43/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/")

    EDIT: I found the problem myself. The path should be without "/scipy" at the end. But now I get an architecture problem:

        ImportError: dlopen(/opt/local/var/macports/software/py26-scipy/0.7.1_0+gcc43/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/_fftpack.so, 2): no suitable image found. Did find:
        /opt/local/var/macports/software/py26-scipy/0.7.1_0+gcc43/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/_fftpack.so: mach-o, but wrong architecture

    EDIT 2: I checked the architectures. Yes, sure enough it is an architecture problem. But when I run

        file /opt/local/var/macports/software/py26-scipy/0.7.1_0+gcc43/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/_fftpack.so

    I get the result "Mach-O 64-bit bundle x86_64". And the Mac OS 10.6 Python is:

        Mach-O universal binary with 3 architectures
        /usr/bin/python (for architecture x86_64):  Mach-O 64-bit executable x86_64
        /usr/bin/python (for architecture i386):    Mach-O executable i386
        /usr/bin/python (for architecture ppc7400): Mach-O executable ppc

    I build the Xcode project as x86_64.

    Read the article

  • Are free operator->* overloads evil?

    - by Potatoswatter
    I was perusing section 13.5 after refuting the notion that built-in operators do not participate in overload resolution, and noticed that there is no section on operator->*. It is just a generic binary operator. Its brethren, operator->, operator*, and operator[], are all required to be non-static member functions. This precludes definition of a free function overload to an operator commonly used to obtain a reference from an object. But the uncommon operator->* is left out.

    In particular, operator[] has many similarities. It is binary (they missed a golden opportunity to make it n-ary), and it accepts some kind of container on the left and some kind of locator on the right. Its special-rules section, 13.5.5, doesn't seem to have any actual effect except to outlaw free functions. (And that restriction even precludes support for commutativity!)

    So, for example, this is perfectly legal (in C++0x; remove obvious stuff to translate to C++03):

        #include <utility>
        #include <iostream>
        #include <type_traits>
        using namespace std;

        template< class F, class S >
        typename common_type< F,S >::type operator->*( pair<F,S> const &l, bool r )
            { return r? l.second : l.first; }

        template< class T >
        T & operator->*( pair<T,T> &l, bool r )
            { return r? l.second : l.first; }

        template< class T >
        T & operator->*( bool l, pair<T,T> &r )
            { return l? r.second : r.first; }

        int main() {
            auto x = make_pair( 1, 2.3 );
            cerr << x->*false << " " << x->*4 << endl;

            auto y = make_pair( 5, 6 );
            y->*(0) = 7;
            y->*0->*y = 8; // evaluates to 7->*y = y.second
            cerr << y.first << " " << y.second << endl;
        }

    I can certainly imagine myself giving into temp[la]tation. For example, scaled indexes for vector:

        v->*matrix_width[2][5] = x;

    Did the standards committee forget to prevent this, was it considered too ugly to bother, or are there real-world use cases?

    Read the article

  • Trouble subnetting...

    - by ???
    I have to learn how to subnet by hand for a test, and I'm having real problems doing it. I keep getting stuck. Here's an example:

        138.248.184.17/18  - IP
        255.255.192.0      - Subnet Mask
        192 = 1100 0000 in binary

    And I know 184 in the IP address is the "octet of interest". OK, I get that far... and then I'm lost. I know I need to set the network bits of 192 (I think?) to all 0 for the network ID and then to all 1 for the broadcast ID. The problem is, how do I know which part of 1100 0000 is network and which part is host?
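
    A worked pass at this example, reading the first 18 bits as network and the remaining 14 as host:

        184          = 1011 1000
        mask octet   = 1100 0000   (the first two bits of this octet are network, the last six are host)
        184 AND 192  = 1000 0000 = 128  -> network ID: 138.248.128.0
        host bits 1  = 1011 1111 = 191  -> broadcast:  138.248.191.255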

    Read the article

  • Sharing storage on Linux and Solaris

    - by devlearn
    I'm looking for a solution to share a SAN-mounted volume between several hosts running Linux (RHEL) and/or Solaris (SPARC). Note that I basically need to share a set of directories containing large binary files that are accessed in random R/W mode. I have the following requirements:

        - keep the data on the SAN
        - suitable I/O performance, as the software is pretty demanding on IOPS
        - stick to a shared file system, as I can't afford a cluster FS (lack of MDS/OSS infrastructure)
        - compression could be really useful

    For now I've found only the following candidates:

        - GFS2: supports Linux only, no compression
        - VxFS: supports Linux and Solaris, compression supported

    So if you have some suggestions for this list, I'll really welcome them. Thanks in advance.

    Read the article

  • Silverlight Vs. WPF Vs. Winforms What is good for specifically my purpose?

    - by Cyril Gupta
    I am about to start a new Windows application, and the contenders for the platform are:

        - Windows Forms
        - WPF
        - Silverlight

    Now, my experience with WPF, at least in my last application, was not very encouraging (the app failed to run on the deployment machines and I had to re-do it in WinForms), so my confidence is shaken here. My app is for mass distribution (the last version had some 100,000+ installations), so I want to make absolutely sure that my users will be able to use it and enjoy it without any problems. I would love to create a nice interface, going the next step like a Flex, Silverlight, or iPhone app, with animations and effects. So I would really like to go with WPF or Silverlight if I can. My needs are:

        - Good support for visuals and animation effects.
        - Support for database connectivity.
        - Support for printing (is there an equivalent of PrintDocument in Silverlight?)
        - Must not suffer from deployment troubles.

    Silverlight is universal, but does it have printing support and a good control toolset? WPF has printing support and a nice toolset, but can I depend on it? WinForms is dated already and is not so impressive, but should I go with it anyway? Your advice would be appreciated.

    Read the article

  • Apache 2.2 Windows XP Uninstall Doesn't Remove All Files

    - by AJ
    Hello, I am trying to uninstall Apache 2.2 on Windows XP. The original installation was from the binary MSI distribution. The uninstall function of the MSI ran successfully, but it failed to remove a couple of folders:

        C:\Apache2.2\conf
        C:\Apache2.2\logs

    I am unable to remove these folders manually because they contain files for which I do not have ownership. And this is the source of my confusion: why, if I installed the program, do I not have permission to remove it? To be clear, I do not have local admin rights (nor can I request them), yet the files in these two remaining folders are owned by Administrator. How is it that Administrator created these files (and how can I possibly remove them)? Thanks, -aj

    Read the article

  • Subversion multi checkout post-commit hook?

    - by FLX
    The title must sound strange but I'm trying to achieve the following: SVN repo location: /home/flx/svn/flxdev SVN repo "flxdev" structure: + Project1 ++ files + Project2 + Project3 + Project4 I'm trying to set up a post-commit hook that automatically checks out on the other end when I do a commit. The post-commit doc explicitly lists the following: # POST-COMMIT HOOK # # The post-commit hook is invoked after a commit. Subversion runs # this hook by invoking a program (script, executable, binary, etc.) # named 'post-commit' (for which this file is a template) with the # following ordered arguments: # # [1] REPOS-PATH (the path to this repository) # [2] REV (the number of the revision just committed) So I made the following command to test: REPOS="$1" REV="$2" echo "Updated project $REPOS to $REV" However when I edit files in Project1 for example, this outputs "Updated project /home/flx/svn/flxdev to 1016" I'd like this to be: "Updated project Project1 to 1016" Having this variable allows me to specify to do different actions per project post-commit. How can I specify the project parameter? Thanks! Dennis

    Read the article

  • Table cell in a split-view controller - selected cell becomes deselected when called by reloadData

    - by bpapa
    I'm working on a universal app that uses a split view controller to present a master-detail view. In the iPad HIG on split views, Apple states:

        In general, indicate the current selection in the left pane in a persistent way. This behavior helps people understand the relationship between the item in the left pane and the contents of the right pane. This is important because the contents of the right pane can change, but they should always remain related to the item selected in the left pane.

    So I'm trying to maintain selection state on the left. Easy enough when the user taps: I just remove the deselectRowAtIndexPath:animated: message from my tableView:didSelectRowAtIndexPath: implementation. But I also want the selection state to show up by default (without a user tap). I wound up putting this in my tableView:cellForRowAtIndexPath: implementation:

        if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
            if (cellShouldBeSelected)
                cell.selected = YES;
            else
                cell.selected = NO;
        }

    The behavior I'm seeing is that when the cells finally appear, for a fraction of a second the cell is indeed selected, but then the selection disappears without any user interaction. Any ideas? I set the new clearsSelectionOnViewWillAppear property to NO, but that doesn't seem to fix it, and it shouldn't really matter because I'm marking the cell as selected long after viewWillAppear is called; I'm actually doing it after some network activity and then sending the table view a reloadData message.
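
    One alternative worth trying (a sketch; the method and index path names are placeholders): instead of toggling cell.selected inside tableView:cellForRowAtIndexPath:, ask the table view itself to select the row once the data has loaded, e.g. right after reloadData:

        - (void)restoreSelectionAtIndexPath:(NSIndexPath *)indexPath {
            if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
                [self.tableView selectRowAtIndexPath:indexPath
                                            animated:NO
                                      scrollPosition:UITableViewScrollPositionNone];
            }
        }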

    Read the article

  • Error while compiling/installing PHP with FPM for RPM on Centos 5.4 x64

    - by Raymond
    Hi, I'm trying to make an RPM with PHP 5.3.1 and PHP-FPM 0.6 for CentOS 5.4. So far it goes quite well, but when rpmbuild gets to the installation phase it fails with the following error:

        Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.63379
        + umask 022
        + cd /usr/src/redhat/BUILD
        + cd /usr/src/redhat/BUILD/php-5.3.1/fpm-build/
        + make install
        Installing PHP SAPI module:       fpm
        Installing PHP CLI binary:        /usr/bin/
        cp: cannot create regular file `/usr/bin/#INST@12668#': Permission denied
        make: *** [install-cli] Error 1
        error: Bad exit status from /var/tmp/rpm-tmp.63379 (%install)

        RPM build errors:
            Bad exit status from /var/tmp/rpm-tmp.63379 (%install)

    I am running rpmbuild as a normal user, so it's understandable that it will fail to install anything into /usr/bin, but it shouldn't try to install anything outside the buildroot in the first place. I have, however, specified the BuildRoot in the header of the spec file, and I can see it is passed correctly to the make install command. Does anyone have some idea of what is going wrong here? Thanks a lot!
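
    For what it's worth, PHP's generated makefiles honour INSTALL_ROOT rather than DESTDIR, so the %install section usually has to pass the buildroot explicitly; a sketch (the fpm-build path mirrors the build directory shown in the log above):

        %install
        rm -rf %{buildroot}
        make -C fpm-build install INSTALL_ROOT=%{buildroot}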

    Read the article

  • Old desktop programmer wants to create S+S project

    - by Craig
    I have an idea for a product that I want to be web-based. But because I live in Brasil, the internet is not always available, so there needs to be a desktop component for when the internet is down. Also, I have been a SQL programmer and a desktop application programmer using dBase, VB and Pascal, and I have created simple websites using HTML and website creation tools such as FrontPage. So from my research, I think I have the following options: PHP, Ruby on Rails, Python or .NET for the programming side; MySQL for the DB; and Apache, or possibly IIS, for the webserver. I will probably start with a local ISP provider for the cloud service, but then maybe move to something more "robust" and universal in the future, e.g. Amazon, or Azure, or something along that line. My question then is this: what would you recommend for something like this? I'm sure that I have not listed all of the possibilities, but these are the ones I have researched and thought of. Thanks everyone, Craig

    Read the article

< Previous Page | 108 109 110 111 112 113 114 115 116 117 118 119  | Next Page >