Search Results

Search found 18729 results on 750 pages for 'edit'.

  • iOS - Passing variable to view controller

    - by gj15987
    I have a view with a view controller, and when I show this view on screen I want to be able to pass variables to it from the calling class, so that I can set the values of labels etc. First, I just tried creating a property for one of the labels and setting it from the calling class. For example:

        SetTeamsViewController *vc = [[SetTeamsViewController alloc] init];
        vc.myLabel.text = self.teamCount;
        [self presentModalViewController:vc animated:YES];
        [vc release];

    However, this didn't work. So I tried creating a convenience initializer:

        SetTeamsViewController *vc = [[SetTeamsViewController alloc] initWithTeamCount:self.teamCount];

    And then in the SetTeamsViewController I had:

        - (id)initWithTeamCount:(int)teamCount {
            self = [super initWithNibName:nil bundle:nil];
            if (self) {
                // Custom initialization
                self.teamCountLabel.text = [NSString stringWithFormat:@"%d", teamCount];
            }
            return self;
        }

    However, this didn't work either. It's just loading whatever value I've given the label in the nib file. I've littered the code with NSLog()s and it is passing the correct variable values around; it's just not setting the label. Any help would be greatly appreciated.

    EDIT: I've just tried setting an instance variable in my designated initializer and then setting the label in viewDidLoad, and that works! Is this the best way to do this? Also, when dismissing this modal view controller, I update the text of a button in the view of the calling ViewController too. However, if I press this button again (to show the modal view again) whilst the other view is animating on screen, the button temporarily has its original value again (from the nib). Does anyone know why this is?

  • apache proxy module gives 403 forbidden error

    - by naiquevin
    I am trying to use Apache's proxy module for working with XMPP on Ubuntu desktop. For this I did the following things:

    1) Enabled mod_proxy by creating symlinks to proxy.conf, proxy.load and proxy_http.load from /etc/apache2/mods-available/ in the mods-enabled directory.

    2) Added the following lines to the vhost:

        <Proxy http://mydomain.com/httpbind>
            Order allow,deny
            Allow from all
        </Proxy>
        ProxyPass /httpbind http://mydomain.com:7070/http-bind/
        ProxyPassReverse /httpbind http://mydomain.com:7070/http-bind/

    I am new to using the proxy module, but what I can make of the above lines is that requests to http://mydomain.com/httpbind will be forwarded to http://mydomain.com:7070/http-bind/. Kindly correct me if that's wrong.

    3) Added the rule Allow from .mydomain.com in /mods-available/proxy.conf.

    Now I try to access http://mydomain.com/httpbind and it shows a 403 Forbidden error. What am I missing here? Please help. Thanks.

    Edit: The problem got solved when I changed the following code in mods-available/proxy.conf

        <Proxy *>
            AddDefaultCharset off
            Order deny,allow
            Deny from all
            Allow from mydomain.com
        </Proxy>

    to

        <Proxy *>
            AddDefaultCharset off
            Order deny,allow
            #Deny from all
            Allow from all
        </Proxy>

    Didn't get what was wrong with the initial code, though.

  • Subset and lagging list data structure R

    - by user1234440
    I have a list that is indexed like the following:

        > list.stuff
        [[1]]
        [[1]]$vector ...
        [[1]]$matrix ....
        [[1]]$vector

        [[2]]
        null

        [[3]]
        [[3]]$vector ...
        [[3]]$matrix ....
        [[3]]$vector
        .
        .
        .

    Each segment in the list is indexed according to another vector of indexes:

        > index.list
        1, 3, 5, 10, 15

    In list.stuff, only at each of the indexes 1, 3, 5, 10, 15 will there be 2 vectors and one matrix; everything else will be null like [[2]]. What I want to do is to lag like the lag.xts function, so that whatever is stored in [[1]] will be pushed to [[3]] and the last one drops off. This also requires subsetting the list, if it's possible. I was wondering if there exist some functions that handle list manipulation. My thinking is that for xts, a time series can be extracted based on an index you supply:

        xts.object[index,]   # returns the rows 1,3,5,10,15

    From here I can lag it with:

        lag.xts(xts.object[index,])

    Any help would be appreciated, thanks.

    EDIT: Here is a reproducible example:

        list.stuff <- list()
        vec <- c(1,2,3,4,5,6,7,8,9)
        vec2 <- c(1,2,3,4,5,6,7,8,9)
        mat <- matrix(c(1,2,3,4,5,6,7,8),4,2)
        list.vec.mat <- list(vec=vec, mat=mat, vec2=vec2)
        ind <- c(2,4,6,8,10)
        for(i in ind){
            list.stuff[[i]] <- list.vec.mat
        }
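    For illustration only, here is a minimal sketch of the shift being described, written in Python rather than R (indexes are 0-based here, and the payload dictionaries are invented stand-ins for the vector/matrix segments): each populated slot is pushed to the next populated slot, the last one drops off, and the first populated slot becomes empty.

        def lag_by_index(stuff, indexes):
            """Return a copy of `stuff` with the entries at `indexes` shifted forward."""
            lagged = list(stuff)                      # shallow copy; other slots untouched
            for prev, nxt in zip(indexes, indexes[1:]):
                lagged[nxt] = stuff[prev]             # [[1]] -> [[3]], [[3]] -> [[5]], ...
            lagged[indexes[0]] = None                 # first populated slot is now empty
            return lagged

        stuff = [None] * 16
        for i in (1, 3, 5, 10, 15):                   # populated slots, 0-based
            stuff[i] = {"vec": [1, 2, 3], "mat": [[1, 2], [3, 4]]}

        print(lag_by_index(stuff, [1, 3, 5, 10, 15])[3])   # payload that used to sit at 1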

  • How can I receive mouse events when a wrapped control has set capture?

    - by Greg
    My WndProc isn't seeing mouse-up notifications when I click with a modifier key (shift or control) pressed. I see them without the modifier key, and I see mouse-down notifications with the modifier keys. I'm trying to track user actions in a component I didn't write, so I'm using the Windows Forms NativeWindow wrapper (wrapping the component) to get Windows messages from the WndProc() method. I've tried tracking the notifications I do get, and the only clue I see is WM_CAPTURECHANGED. I've tried calling SetCapture when I receive the WM_LBUTTONDOWN message, but it doesn't help.

    Without modifier (skipping paint, timer and NCHITTEST messages):

        WM_PARENTNOTIFY
        WM_MOUSEACTIVATE
        WM_MOUSEACTIVATE
        WM_SETCURSOR
        WM_LBUTTONDOWN
        WM_SETCURSOR
        WM_MOUSEMOVE
        WM_SETCURSOR
        WM_LBUTTONUP

    With modifier (skipping paint, timer and NCHITTEST messages):

        WM_KEYDOWN
        WM_PARENTNOTIFY
        WM_MOUSEACTIVATE
        WM_MOUSEACTIVATE
        WM_SETCURSOR
        WM_LBUTTONDOWN
        WM_SETCURSOR (repeats)
        WM_KEYDOWN (repeats)
        WM_KEYUP

    If I hold the mouse button down for a long time, I can usually get a WM_LBUTTONUP notification, but it should be possible to make it more responsive.

    Edit: I've tried control-clicking outside of the component of interest and moving the cursor into it before releasing the mouse button, and then I do get a WM_LBUTTONUP notification, so it looks like the component is capturing the mouse on mouse-down. Is there any way to receive that notification when another window has captured the mouse? Thanks.

  • Compact data structure for storing a large set of integral values

    - by Odrade
    I'm working on an application that needs to pass around large sets of Int32 values. The sets are expected to contain ~1,000,000-50,000,000 items, where each item is a database key in the range 0-50,000,000. I expect the distribution of ids in any given set to be effectively random over this range. The operations I need on the set are dirt simple:

    - Add a new value
    - Iterate over all of the values

    There is a serious concern about the memory usage of these sets, so I'm looking for a data structure that can store the ids more efficiently than a simple List<int> or HashSet<int>. I've looked at BitArray, but that can be wasteful depending on how sparse the ids are. I've also considered a bitwise trie, but I'm unsure how to calculate the space efficiency of that solution for the expected data. A Bloom filter would be great, if only I could tolerate the false negatives. I would appreciate any suggestions of data structures suitable for this purpose. I'm interested in both out-of-the-box and custom solutions.

    EDIT: To answer your questions:

    - No, the items don't need to be sorted.
    - By "pass around" I mean both pass between methods and serialize and send over the wire. I clearly should have mentioned this.
    - There could be a decent number of these sets in memory at once (~100).
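    As a rough sketch of one "custom solution" direction (in Python, not the poster's C#, and assuming the ids can be sorted at serialization time even though they need not stay sorted in memory): store the gaps between consecutive sorted ids as variable-length integers. With random keys in 0..50,000,000 the average gap is small, so most gaps fit in one or two bytes instead of a fixed four bytes per id.

        def encode_varint_deltas(ids):
            """Delta + varint (LEB128-style) encode an iterable of non-negative ints."""
            out = bytearray()
            prev = 0
            for value in sorted(ids):
                gap = value - prev
                prev = value
                while True:                     # 7 payload bits per byte
                    byte = gap & 0x7F
                    gap >>= 7
                    if gap:
                        out.append(byte | 0x80) # high bit set: more bytes follow
                    else:
                        out.append(byte)
                        break
            return bytes(out)

        def decode_varint_deltas(blob):
            """Yield the original ids back from the encoded bytes."""
            total = shift = gap = 0
            for byte in blob:
                gap |= (byte & 0x7F) << shift
                if byte & 0x80:
                    shift += 7
                else:
                    total += gap
                    yield total
                    gap = shift = 0

        ids = [17, 42_000_000, 5, 123_456]
        blob = encode_varint_deltas(ids)
        assert list(decode_varint_deltas(blob)) == sorted(ids)
        print(len(blob), "bytes for", len(ids), "ids")   # vs. 4 bytes per id raw

    The same idea translates directly to a byte array plus a small encoder/decoder in C#; it mainly pays off for serialization and wire transfer, where iteration order does not matter.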

  • Given a typical Rails 3 environment, why am I unable to execute any tests?

    - by Tom
    I'm working on writing simple unit tests for a Rails 3 project, but I'm unable to actually execute any tests. Case in point, attempting to run the test auto-generated by Rails fails:

        require 'test_helper'

        class UserTest < ActiveSupport::TestCase
          # Replace this with your real tests.
          test "the truth" do
            assert true
          end
        end

    Results in the following error:

        <internal:lib/rubygems/custom_require>:29:in `require': no such file to load -- test_helper (LoadError)
            from <internal:lib/rubygems/custom_require>:29:in `require'
            from user_test.rb:1:in `<main>'

    Commenting out the require 'test_helper' line and attempting to run the test results in this error:

        user_test.rb:3:in `<main>': uninitialized constant Object::ActiveSupport (NameError)

    The Action Pack gems appear to be properly installed and up to date:

        actionmailer (3.0.3, 2.3.5)
        actionpack (3.0.3, 2.3.5)
        activemodel (3.0.3)
        activerecord (3.0.3, 2.3.5)
        activeresource (3.0.3, 2.3.5)
        activesupport (3.0.3, 2.3.5)

    Ruby is at 1.9.2p0 and Rails is at 3.0.3. A sample dump of my test directory is as follows:

        /fixtures
        /functional
        /integration
        /performance
        /unit
        -- /helpers
        -- user_helper_test.rb
        -- user_test.rb
        test_helper.rb

    I've never seen this problem before - I've run the typical rake tasks for preparing the test environment. I have nothing out of the ordinary in my application or environment configuration files, nor have I installed any unusual gems that would interfere with the test environment.

    Edit: Xavier Holt's suggestion, explicitly specifying the path to the test_helper, worked; however, this revealed an issue with ActiveSupport. Now when I attempt to run the test, I receive the following error message (as also listed above):

        user_test.rb:3:in `<main>': uninitialized constant Object::ActiveSupport (NameError)

    But as you can see above, Action Pack is all installed and up to date.

  • methods of metaclasses on class instances.

    - by Stefano Borini
    I was wondering what happens to methods declared on a metaclass. I expected that if you declare a method on a metaclass, it will end up being a classmethod; however, the behavior is different.

    Example:

        >>> class A(object):
        ...     @classmethod
        ...     def foo(cls):
        ...         print "foo"
        ...
        >>> a = A()
        >>> a.foo()
        foo
        >>> A.foo()
        foo

    However, if I try to define a metaclass and give it a method foo, it seems to work for the class, but not for the instance:

        >>> class Meta(type):
        ...     def foo(self):
        ...         print "foo"
        ...
        >>> class A(object):
        ...     __metaclass__ = Meta
        ...     def __init__(self):
        ...         print "hello"
        ...
        >>> a = A()
        hello
        >>> A.foo()
        foo
        >>> a.foo()
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        AttributeError: 'A' object has no attribute 'foo'

    What's going on here exactly?

    edit: bumping the question
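    A small sketch of what is going on, using Python 3 syntax (the question above uses Python 2's __metaclass__, but the lookup rule is the same): attribute lookup on an instance searches type(a) == A and its bases, while lookup on the class searches type(A) == Meta. So a metaclass method behaves like a classmethod that is visible only on the class, never on its instances.

        class Meta(type):
            def foo(cls):
                print("foo from", cls.__name__)

        class A(metaclass=Meta):
            pass

        a = A()

        A.foo()                       # found on type(A) == Meta, bound with cls=A
        print(hasattr(a, "foo"))      # False: instance lookup only walks A and its bases
        print(type(a), type(A))       # <class 'A'>  <class 'Meta'>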

  • Problems updating a textBox ASP.NET

    - by Roger Filipe
    Hello, I'm starting in ASP.NET and am having some problems that I do not understand. The problem is this: I am building a site for news. Every news item has a title and a body. I have a page where I can insert news; this page uses a textbox for each of the fields (title and body), and after clicking the submit button everything goes OK and it saves the values in the database. And I have another page where I can read the news, where I use labels for each of the fields; these labels are set in Page_Load.

    Now I'm having problems on the page where I can edit the news. I am loading the two textboxes (title and body) in Page_Load. So far so good, but then when I change the text and click the submit button, it ignores the changes that I made to the text and saves the text loaded in Page_Load. This code doesn't show any database connection, but you can understand what I'm talking about:

        protected void Page_Load(object sender, EventArgs e)
        {
            textboxTitle.Text = "This is the title of the news";
            textboxBody.Text = "This is the body of the news";
        }

    I load the page, make the changes in the text, and then click submit:

        protected void btnSubmit_Click(object sender, EventArgs e)
        {
            String title = textboxTitle.Text;
            String body = textboxBody.Text;

            Response.Write("Title: " + title + " || ");
            Response.Write("Body: " + body);
        }

    Nothing happens; the text in the textboxes is always the one I loaded in Page_Load. How do I update the text in the textboxes?

  • LINQ-SQL Updating Multiple Rows in a single transaction

    - by RPM1984
    Hi guys, I need help refactoring this legacy LINQ-SQL code, which is generating around 100 UPDATE statements. I'll keep playing around with the best solution, but would appreciate some ideas/past experience with this issue. Here's my code:

        List<Foo> foos;
        int userId = 123;

        using (DataClassesDataContext db = new FooDatabase())
        {
            foos = (from f in db.FooBars
                    where f.UserId == userId
                    select f).ToList();

            foreach (FooBar fooBar in foos)
            {
                fooBar.IsFoo = false;
            }

            db.SubmitChanges();
        }

    Essentially I want to update the IsFoo field to false for all records that have a particular UserId value. What's happening is that the .ToList() is firing off a query to get all the FooBars for a particular user, and then for each Foo object it's executing an UPDATE statement updating the IsFoo property. Can the above code be refactored into one single UPDATE statement? Ideally, the only SQL I want fired is the below:

        UPDATE FooBars SET IsFoo = FALSE WHERE UserId = 123

    EDIT: OK, so it looks like it can't be done without using db.ExecuteCommand. Grr...! What I'll probably end up doing is creating another extension method for the DLINQ namespace. It still requires some hardcoding (i.e. writing "WHERE" and "UPDATE"), but at least it hides most of the implementation details away from the actual LINQ query syntax.

  • Threshold of blurry image - part 2

    - by 1''
    How can I threshold this blurry image to make the digits as clear as possible? In a previous post, I tried adaptively thresholding a blurry image (left), which resulted in distorted and disconnected digits (right). Since then, I've tried using a morphological closing operation as described in this post to make the brightness of the image uniform. If I adaptively threshold this image, I don't get significantly better results. However, because the brightness is approximately uniform, I can now use an ordinary threshold. This is a lot better than before, but I have two problems:

    - I had to manually choose the threshold value. Although the closing operation results in uniform brightness, the level of brightness might be different for other images.
    - Different parts of the image would do better with slight variations in the threshold level. For instance, the 9 and 7 in the top left come out partially faded and should have a lower threshold, while some of the 6s have fused into 8s and should have a higher threshold.

    I thought that going back to an adaptive threshold, but with a very large block size (1/9th of the image), would solve both problems. Instead, I end up with a weird "halo effect" where the centre of the image is a lot brighter, but the edges are about the same as the normally-thresholded image.

    Edit: remi suggested morphologically opening the thresholded image at the top right of this post. This doesn't work too well. Using elliptical kernels, only a 3x3 is small enough to avoid obliterating the image entirely, and even then there are significant breakages in the digits.
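    One way to avoid picking the threshold by hand is to divide out the background estimate instead of (or after) closing, then let Otsu choose a global threshold from the histogram. The sketch below uses OpenCV in Python; the file names are placeholders, the 51x51 kernel size is a guess that should simply be much larger than a digit, and it assumes dark digits on a lighter background.

        import cv2

        img = cv2.imread("digits.png", cv2.IMREAD_GRAYSCALE)   # placeholder path

        # Background estimate: a closing with a kernel much larger than a digit.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (51, 51))
        background = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)

        # Divide out the background so bright and dim regions become comparable.
        normalized = cv2.divide(img, background, scale=255)

        # Otsu picks the threshold from the histogram, so nothing is hand-tuned.
        _, binary = cv2.threshold(normalized, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)

        cv2.imwrite("digits_binary.png", binary)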

  • git changes modification time of files

    - by tanascius
    In the GitFaq I can read that Git sets the current time as the timestamp on every file it modifies, but only those. However, I tried this command sequence (EDIT: added the complete command sequence):

        $ git init test && cd test
        Initialized empty Git repository in d:/test/.git/

        exxxxxxx@wxxxxxxx /d/test (master)
        $ touch filea fileb

        exxxxxxx@wxxxxxxx /d/test (master)
        $ git add .

        exxxxxxx@wxxxxxxx /d/test (master)
        $ git commit -m "first commit"
        [master (root-commit) fcaf171] first commit
         0 files changed, 0 insertions(+), 0 deletions(-)
         create mode 100644 filea
         create mode 100644 fileb

        exxxxxxx@wxxxxxxx /d/test (master)
        $ ls -l > filea

        exxxxxxx@wxxxxxxx /d/test (master)
        $ touch fileb -t 200912301000

        exxxxxxx@wxxxxxxx /d/test (master)
        $ ls -l
        total 1
        -rw-r--r-- 1 exxxxxxx Administ 132 Feb 12 18:36 filea
        -rw-r--r-- 1 exxxxxxx Administ   0 Dec 30 10:00 fileb

        exxxxxxx@wxxxxxxx /d/test (master)
        $ git status -a
        warning: LF will be replaced by CRLF in filea
        # On branch master
        warning: LF will be replaced by CRLF in filea
        # Changes to be committed:
        #   (use "git reset HEAD <file>..." to unstage)
        #
        #       modified:   filea
        #

        exxxxxxx@wxxxxxxx /d/test (master)
        $ git checkout .

        exxxxxxx@wxxxxxxx /d/test (master)
        $ ls -l
        total 0
        -rw-r--r-- 1 exxxxxxx Administ 0 Feb 12 18:36 filea
        -rw-r--r-- 1 exxxxxxx Administ 0 Feb 12 18:36 fileb

    Now my question: why did git change the timestamp of fileb? I'd expect the timestamp to be unchanged. Are my commands causing a problem? Maybe it is possible to do something like a git checkout . --modified instead? I am using git version 1.6.5.1.1367.gcd48 under mingw32/Windows XP.

  • Database design: Calculating the Account Balance

    - by 001
    How do I design the database to calculate the account balance?

    1) Currently I calculate the account balance from the transaction table. In my transaction table I have "description" and "amount" etc. I would then add up all the "amount" values, and that would work out the user's account balance.

    I showed this to my friend and he said that is not a good solution; when my database grows it's going to slow down. He said I should create a separate table to store the calculated account balance. If I did this, I would have to maintain two tables, and it's risky: the account balance table could go out of sync. Any suggestions?

    EDIT:

    OPTION 2: Should I add an extra column, "Balance", to my transaction table? Then I would not need to go through many rows of data to perform my calculation. Example: John buys $100 credit, he spends $60, he then adds $200 credit.

        Amount  $100, Balance  $100.
        Amount  -$60, Balance   $40.
        Amount  $200, Balance  $240.
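    For illustration, a minimal sketch of option 1 in Python with sqlite3 (the table and column names are invented, and amounts are stored as signed integer cents): keep only the transactions, put an index on the user column, and let the database sum them. There is no second table or running column to drift out of sync.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""
            CREATE TABLE transactions (
                id          INTEGER PRIMARY KEY,
                user_id     INTEGER NOT NULL,
                description TEXT,
                amount      INTEGER NOT NULL   -- signed cents: credits > 0, debits < 0
            )
        """)
        conn.execute("CREATE INDEX idx_tx_user ON transactions(user_id)")

        conn.executemany(
            "INSERT INTO transactions (user_id, description, amount) VALUES (?, ?, ?)",
            [(1, "buy credit", 10000), (1, "purchase", -6000), (1, "buy credit", 20000)],
        )

        balance, = conn.execute(
            "SELECT COALESCE(SUM(amount), 0) FROM transactions WHERE user_id = ?", (1,)
        ).fetchone()
        print(balance)   # 24000 cents, i.e. $240

    A denormalized balance (option 2 or a separate table) only becomes worth the sync risk once the per-user transaction count is large enough that the indexed SUM is measurably slow.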

  • SSIS - Skip Missing Files

    - by Greg
    I have a SSIS 2008 package that calls about 10 other SSIS packages (legacy issues, don't ask). Each of those child packages loads a specific file into a table. But sometimes one or more of these input files will be missing. How can I let a child package fail (because a file is missing) but let the rest of the parent package keep on running?

    I've tried increasing the maximum error count on the parent package, on the tasks in the parent package that call each child, and in the child package itself. None of that seemed to make any difference. I still get this error when I run it with a file missing:

        SSIS Warning Code DTS_W_MAXIMUMERRORCOUNTREACHED. The Execution method succeeded, but the number of errors raised (2) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.

    Edit: FailPackageOnFailure and FailParentOnFailure are already set to false everywhere.

  • Castle, sharing a transient component between a decorator and a decorated component

    - by Marius
    Consider the following example:

        public interface ITask
        {
            void Execute();
        }

        public class LoggingTaskRunner : ITask
        {
            private readonly ITask _taskToDecorate;
            private readonly MessageBuffer _messageBuffer;

            public LoggingTaskRunner(ITask taskToDecorate, MessageBuffer messageBuffer)
            {
                _taskToDecorate = taskToDecorate;
                _messageBuffer = messageBuffer;
            }

            public void Execute()
            {
                _taskToDecorate.Execute();
                Log(_messageBuffer);
            }

            private void Log(MessageBuffer messageBuffer) {}
        }

        public class TaskRunner : ITask
        {
            public TaskRunner(MessageBuffer messageBuffer)
            {
            }

            public void Execute()
            {
            }
        }

        public class MessageBuffer
        {
        }

        public class Configuration
        {
            public void Configure()
            {
                IWindsorContainer container = null;

                container.Register(
                    Component.For<MessageBuffer>()
                        .LifeStyle.Transient);

                container.Register(
                    Component.For<ITask>()
                        .ImplementedBy<LoggingTaskRunner>()
                        .ServiceOverrides(ServiceOverride.ForKey("taskToDecorate").Eq("task.to.decorate")));

                container.Register(
                    Component.For<ITask>()
                        .ImplementedBy<TaskRunner>()
                        .Named("task.to.decorate"));
            }
        }

    How can I make Windsor instantiate the "shared" transient component so that both the "Decorator" and the "Decorated" component get the same instance?

    Edit: since the design is being critiqued, I am posting something closer to what is being done in the app. Maybe someone can suggest a better solution (if sharing the transient resource between a logger and the true task is considered a bad design).

  • Help me clean up this crazy lambda with the out keyword

    - by Sarah Vessels
    My code looks ugly, and I know there's got to be a better way of doing what I'm doing:

        private delegate string doStuff(
            PasswordEncrypter encrypter, RSAPublicKey publicKey,
            string privateKey, out string salt
        );

        private bool tryEncryptPassword(doStuff encryptPassword, out string errorMessage)
        {
            // ...get some variables...
            string encryptedPassword = encryptPassword(encrypter, publicKey, privateKey, out salt);
            // ...
        }

    This stuff so far doesn't bother me. It's how I'm calling tryEncryptPassword that looks so ugly, and has duplication because I call it from two methods:

        public bool method1(out string errorMessage)
        {
            string rawPassword = "foo";
            return tryEncryptPassword(
                (PasswordEncrypter encrypter, RSAPublicKey publicKey,
                 string privateKey, out string salt) =>
                    encrypter.EncryptPasswordAndDoStuff( // Overload 1
                        rawPassword, publicKey, privateKey, out salt
                    ),
                out errorMessage
            );
        }

        public bool method2(SecureString unencryptedPassword, out string errorMessage)
        {
            return tryEncryptPassword(
                (PasswordEncrypter encrypter, RSAPublicKey publicKey,
                 string privateKey, out string salt) =>
                    encrypter.EncryptPasswordAndDoStuff( // Overload 2
                        unencryptedPassword, publicKey, privateKey, out salt
                    ),
                out errorMessage
            );
        }

    Two parts to the ugliness:

    - I have to explicitly list all the parameter types in the lambda expression because of the single out parameter.
    - The two overloads of EncryptPasswordAndDoStuff take all the same parameters except for the first parameter, which can be either a string or a SecureString. So method1 and method2 are pretty much identical; they just call different overloads of EncryptPasswordAndDoStuff.

    Any suggestions?

    Edit: if I apply Jeff's suggestions, I make the following call in method1:

        return tryEncryptPassword(
            (encrypter, publicKey, privateKey) =>
            {
                var result = new EncryptionResult();
                string salt;
                result.EncryptedValue = encrypter.EncryptPasswordAndDoStuff(
                    rawPassword, publicKey, privateKey, out salt
                );
                result.Salt = salt;
                return result;
            },
            out errorMessage
        );

    Much the same call is made in method2, just with a different first value passed to EncryptPasswordAndDoStuff. This is an improvement, but it still seems like a lot of duplicated code.

  • curl_multi_exec stops if one url is 404, how can I change that?

    - by Rob
    Currently, my cURL multi exec stops if one URL it connects to doesn't work, so a few questions:

    1: Why does it stop? That doesn't make sense to me.
    2: How can I make it continue?

    EDIT: Here is my code:

        $SQL = mysql_query("SELECT url FROM shells");
        $mh = curl_multi_init();
        $handles = array();

        while($resultSet = mysql_fetch_array($SQL)){
            // Load the urls and send GET data
            $ch = curl_init($resultSet['url'] . $fullcurl);
            // Only load it for five seconds (long enough to send the data)
            curl_setopt($ch, CURLOPT_TIMEOUT, 5);
            curl_multi_add_handle($mh, $ch);
            $handles[] = $ch;
        }

        // Create a status variable so we know when exec is done.
        $running = null;

        // Execute the handles
        do {
            // Call exec. This call is non-blocking, meaning it works in the background.
            curl_multi_exec($mh, $running);
            // Sleep while it's executing. You could do other work here, if you have any.
            sleep(2);
            // Keep going until it's done.
        } while ($running > 0);

        // Loop to remove (close) the regular handles.
        foreach($handles as $ch) {
            // Remove the current array handle.
            curl_multi_remove_handle($mh, $ch);
        }

        // Close the multi handle
        curl_multi_close($mh);
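    For comparison only, not PHP or cURL-multi: a hedged Python sketch of the behaviour being asked for, where every URL is fetched concurrently and a 404 (or any other failure) on one URL is simply recorded while the rest keep going. The URLs are placeholders.

        from concurrent.futures import ThreadPoolExecutor, as_completed
        from urllib.request import urlopen
        from urllib.error import URLError, HTTPError

        urls = ["http://example.com/a", "http://example.com/missing", "http://example.com/b"]

        def fetch(url):
            with urlopen(url, timeout=5) as response:   # rough analogue of CURLOPT_TIMEOUT
                return response.status

        with ThreadPoolExecutor(max_workers=10) as pool:
            futures = {pool.submit(fetch, url): url for url in urls}
            for future in as_completed(futures):
                url = futures[future]
                try:
                    print(url, "->", future.result())
                except (HTTPError, URLError) as exc:    # a 404 raises HTTPError; others continue
                    print(url, "failed:", exc)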

  • Building a custom Linux Live CD

    - by Mike Heinz
    Can anyone point me to a good tutorial on creating a bootable Linux CD from scratch? I need help with a fairly specialized problem: my firm sells an expansion card that requires custom firmware. Currently we use an extremely old live CD image of RH7.2 that we update with current firmware. Manufacturing puts the cards in a machine, boots off the CD, the CD writes the firmware, they power off and pull the cards. Because of this cycle, it's essential that the CD boot and shut down as quickly as possible.

    The problem is that with the next generation of cards, I have to update the CD to a 2.6 kernel. It's easy enough to acquire a pre-existing live CD - but those are all designed for showing off Linux on the desktop - which means they take forever to boot. Can anyone fix me up with a current How-To?

    Update: So, just as a final update for anyone reading this later - the tool I ended up using was "livecd-creator". My reason for choosing this tool was that it is available for RedHat-based distributions like CentOS, Fedora and RHEL - which are all distributions that my company supports already. In addition, while the project is very poorly documented, it is extremely customizable. I was able to create a minimal LiveCD and edit the boot sequence so that it booted directly into the firmware updater instead of a bash shell. The whole job would have only taken an hour or two if there had been a README explaining the configuration file!

  • Using a "take-home" coding component in interview process

    - by Jeff Sargent
    In recent interviews I have been asking candidates to code through some questions on the whiteboard. I don't feel I'm getting a clear enough picture of the candidate's technical ability with this approach. Granted, the questions might not be good enough, maybe the interview needs to be longer, etc., but I'm wondering if a different approach would be better.

    What I'd like to try is to create a simple, working project in Visual Studio and have it checked into source control. The candidate can check that code out from home/wherever and then check back in work representing their response to the assignment that I'll provide. I'm thinking that if the window of time is short enough and the assignment clear enough, then the solution will be safe enough from all-out Googling (i.e. they couldn't search for and find the entire solution online). I would then be able to review the candidate's work.

    Has anyone worked with something like this before, either to vet a candidate or as a candidate yourself? Any thoughts in general?

    P.S. My first StackOverflow question - hi guys and gals.

    EDIT: I've seen comments about asking someone to work for free - I wouldn't mind paying the person for their time.

  • Reporting Services "cannot connect to the report server database"

    - by Dano
    We have Reporting Services running, and twice in the past 6 months it has been down for 1-3 days, and then suddenly it starts working again. The errors range from not being able to view the tree root in a browser, down to being able to insert parameters on a report but crashing before the report can generate. Looking at the logs, there is one error and one warning which seem to correspond somewhat.

    ERROR:

        Event Type: Error
        Event Source: Report Server (SQL2K5)
        Event Category: Management
        Event ID: 107
        Date: 2/13/2009
        Time: 11:17:19 AM
        User: N/A
        Computer: ********
        Description: Report Server (SQL2K5) cannot connect to the report server database.
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    WARNING (always comes before the previous error):

        Event code: 3005
        Event message: An unhandled exception has occurred.
        Event time: 2/13/2009 11:06:48 AM
        Event time (UTC): 2/13/2009 5:06:48 PM
        Event ID: 2efdff9e05b14f4fb8dda5ebf16d6772
        Event sequence: 550
        Event occurrence: 5
        Event detail code: 0
        Process information:
            Process ID: 5368
            Process name: w3wp.exe
            Account name: NT AUTHORITY\NETWORK SERVICE
        Exception information:
            Exception type: ReportServerException
            Exception message: For more information about this error navigate to the report server on the local server machine, or enable remote errors.

    During the downtime we tried restarting everything, from the server RS runs on to the database it calls to fill reports, with no success. When I came in Monday morning it was working again. Does anyone out there have any ideas on what could be causing these issues?

    Edit: Tried both suggestions below several months ago, to no avail. The issue hasn't arisen since; maybe something out of my control has changed...

  • How are you using C++0x today? [closed]

    - by Roger Pate
    This is a question in two parts. The first is the most important and concerns now:

    - Are you following the design and evolution of C++0x? What blogs, newsgroups, committee papers, and other resources do you follow?
    - Even where you're not using any new features, how have they affected your current choices?
    - What new features are you using now, either in production or otherwise?

    The second part is a follow-up, concerning the new standard once it is final:

    - Do you expect to use it immediately? What are you doing to prepare for C++0x, other than as listed for the previous questions? Obviously, compiler support must be there, but there are still co-workers, ancillary tools, and other factors to consider. What will most affect your adoption?

    Edit: The original really was too argumentative; however, I'm still interested in the underlying question, so I've tried to clean it up and hopefully make it acceptable. This seems a much better avenue than duplicating—even though some answers responded to the argumentative tone, they still apply to the extent that they addressed the questions, and all answers are community property to be cleaned up as appropriate, too.

  • WF -- how do I use a custom activity without creating it in a separate Workflow Activity Library?

    - by Kevin Craft
    I am trying to accomplish something that seems like it should be very simple. I have a State Machine Workflow Console Application with a workflow in it. I have created a custom activity for it. This activity will NEVER be used ANYWHERE ELSE. I just want to use this activity on my workflow, but:

    - It does not appear in the toolbox.
    - I cannot drag it from the Solution Explorer onto the workflow designer.

    I absolutely do not want to create a separate State Machine Workflow Activity Library, since that will just clutter my solution. Like I said, I will never use this activity in any other project, so I would like to keep it confined to this one... but I just can't figure out how to get it onto the designer! Am I going crazy!? Here is the code for the activity:

        public partial class GameSearchActivity : Activity
        {
            public GameSearchActivity()
            {
                InitializeComponent();
            }

            public static DependencyProperty QueryProperty =
                System.Workflow.ComponentModel.DependencyProperty.Register("Query", typeof(string), typeof(GameSearchActivity));

            [Description("Query")]
            [Category("Dependency Properties")]
            [Browsable(true)]
            [DesignerSerializationVisibility(DesignerSerializationVisibility.Visible)]
            public string Query
            {
                get
                {
                    return ((string)(base.GetValue(GameSearchActivity.QueryProperty)));
                }
                set
                {
                    base.SetValue(GameSearchActivity.QueryProperty, value);
                }
            }

            public static DependencyProperty ResultsProperty =
                System.Workflow.ComponentModel.DependencyProperty.Register("Results", typeof(string), typeof(GameSearchActivity));

            [Description("Results")]
            [Category("Dependency Properties")]
            [Browsable(true)]
            [DesignerSerializationVisibility(DesignerSerializationVisibility.Visible)]
            public IEnumerable<Game_GamePlatform> Results
            {
                get
                {
                    return ((IEnumerable<Game_GamePlatform>)(base.GetValue(GameSearchActivity.ResultsProperty)));
                }
                set
                {
                    base.SetValue(GameSearchActivity.ResultsProperty, value);
                }
            }

            protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
            {
                IDataService ds = executionContext.GetService<IDataService>();
                Results = ds.SearchGames(Query);
                return ActivityExecutionStatus.Closed;
            }
        }

    Thanks.

    EDIT: OK, so I've discovered that if I change the project type from Console Application to Class Library, the custom activity appears in the toolbox. However, this is not acceptable. It needs to be a Console/Windows Application. Does anyone know a way around this?

  • Latex - Apply an operation to every character in a string

    - by hroest
    Hi, I am using LaTeX and I have a problem concerning string manipulation. I want to have an operation applied to every character of a string; specifically, I want to replace every character "x" with "\discretionary{}{}{}x". I want to do this because I have a long string (DNA) which I want to be able to break at any point without hyphenation. Thus I would like to have a command called "myDNA" that will do this for me instead of inserting \discretionary{}{}{} manually after every character. Is this possible? I have looked around the web and there wasn't much helpful information on this topic (at least not any I could understand) and I hoped that you could help.

    --edit

    To clarify: What I want to see in the finished document is something like this:

        the dna sequence is CTAAAGAAAACAGGACGATTAGATGAGCTTGAGAAAGCCATCACCACTCA
        AATACTAAATGTGTTACCATACCAAGCACTTGCTCTGAAATTTGGGGACTGAGTACACCAAATACGATAG
        ATCAGTGGGATACAACAGGCCTTTACAGCTTCTCTGAACAAACCAGGTCTCTTGATGGTCGTCTCCAGGT
        ATCCCATCGAAAAGGATTGCCACATGTTATATATTGCCGATTATGGCGCTGGCCTGATCTTCACAGTCAT
        CATGAACTCAAGGCAATTGAAAACTGCGAATATGCTTTTAATCTTAAAAAGGATGAAGTATGTGTAAACC
        CTTACCACTATCAGAGAGTTGAGACACCAGTTTTGCCTCCAGTATTAGTGCCCCGACACACCGAGATCCT
        AACAGAACTTCCGCCTCTGGATGACTATACTCACTCCATTCCAGAAAACACTAACTTCCCAGCAGGAATT

    Just plain line breaks, without any hyphens. The DNA sequence will be one long string without any spaces or anything, but it can break at any point. This is why my idea was to insert a "\discretionary{}{}{}" after every character, so that it can break at any point without inserting any hyphens.
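    One hedged alternative to writing the \myDNA macro itself is to preprocess the sequence outside LaTeX. The sketch below (Python; the output file name and the shortened sequence are just examples) emits the text with \discretionary{}{}{} inserted after every character, which can then be pasted or \input into the document.

        def breakable(sequence: str) -> str:
            """Insert a no-hyphen break point after each character."""
            return "".join(ch + r"\discretionary{}{}{}" for ch in sequence)

        dna = "CTAAAGAAAACAGGACGATTAGATGAGCTTGAGAAAGCCATCACCACTCA"   # shortened example
        with open("dna_breakable.tex", "w") as out:
            out.write("the dna sequence is " + breakable(dna) + "\n")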

  • C# class can not disguise to be another class because GetType method cannot be override

    - by zinking
    There is a statement in CLR via C# saying that in C#, one class cannot disguise itself as another, because GetType is not virtual and thus it cannot be overridden. But I think in C# we can still hide the parent implementation of GetType. I must have missed something: if I hide the base GetType implementation, then I can disguise my class as another class, is that correct? The key here is not whether GetType is virtual or not; the question is whether we can disguise one class as another in C#.

    The following is the No. 4 answer from the possible duplicate, so my question is more about this: is this kind of disguise possible, and if so, how can we say that we can prevent class type disguise in C#, regardless of whether GetType is virtual or not?

        While it's true that you cannot override the object.GetType() method, you can use "new" to overload it completely, thereby spoofing another known type. This is interesting; however, I haven't figured out how to create an instance of the "Type" object from scratch, so the example below pretends to be another type.

            public class NotAString
            {
                private string m_RealString = string.Empty;

                public new Type GetType()
                {
                    return m_RealString.GetType();
                }
            }

        After creating an instance of this, (new NotAString()).GetType() will indeed return the type for a string.

    A comment on that answer (by Servy) notes: almost anything that looks at GetType has an instance of object, or at the very least some base type that they control or can reason about. If you already have an instance of the most derived type, then there is no need to call GetType on it. The point is that as long as someone uses GetType on an object, they can be sure it's the system's implementation, not any other custom definition.
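    As a cross-language aside (Python, not C#, and only an analogy for the "hiding vs. runtime identity" distinction above): an object can lie to cooperative checks, but the runtime's own view of its type is not fooled, much like a shadowed GetType fools callers of the derived static type while the base Object.GetType still reports the real type.

        class NotAString:
            @property
            def __class__(self):      # lies to isinstance(), which consults __class__
                return str

        x = NotAString()
        print(isinstance(x, str))     # True  - the "disguise" works on cooperative checks
        print(type(x))                # <class 'NotAString'> - the runtime is not fooled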

  • Jquery click event propagation

    - by ozsenegal
    I've a table with click events bound to its rows (tr). Also, there are A elements with their own click events assigned inside those rows. The problem is that when I click on an A element, it also fires the click event from the TR, and I don't want this behavior; I just want to fire the A's click event. Code:

        // Event for rows (TR)
        $("tr:not(:first)").click(function(){
            $(".window,.backFundo,.close").remove();
            var position = $(this).offset().top;
            position = position < 0 ? 20 : position;
            $("body").append($("<div></div>").addClass("backFundo"));
            $("body").append($("<div></div>").addClass("window")
                .html("<span class=close><img src=Images/close.png id=fechar /></span>")
                .append("<span class=titulo>O que deseja fazer?</span><span class=crud><a href=# id=edit>Editar</a></span><span class=crud><a href=# id=delete codigo=" + $(this).children("td:first").html() + ">Excluir</a></span>")
                .css({top:"20px"})
                .fadeIn("slow"));
            $(document).scrollTop(0);
        });

        // Element event
        $("a").live("click", function(){ alert("clicked!"); });

    Whenever I click the anchor, it also fires the event from its parent row. Any ideas?

  • "variable tracking" is eating my compile time!

    - by wowus
    I have an auto-generated file which looks something like this...

        static void do_SomeFunc1(void* parameter)
        {
            // Do stuff.
        }

        // Continues on for another 4000 functions...

        void dispatch(int id, void* parameter)
        {
            switch(id)
            {
                case ::SomeClass1::id:
                    return do_SomeFunc1(parameter);
                case ::SomeClass2::id:
                    return do_SomeFunc2(parameter);
                // This continues for the next 4000 cases...
            }
        }

    When I build it like this, the build time is enormous. If I inline all the functions automagically into their respective cases using my script, the build time is cut in half. GCC 4.5.0 says ~50% of the build time is being taken up by "variable tracking" when I use -ftime-report. What does this mean, and how can I speed up compilation while still maintaining the superior cache locality of pulling the functions out of the switch?

    EDIT: Interestingly enough, the build time has exploded only on debug builds, as per the following profiling information for the whole project (which isn't just the file in question, but is still a good metric; the file in question takes the most time to build):

        Debug:   8 minutes 50 seconds
        Release: 4 minutes, 25 seconds
