Search Results

Search found 421 results on 17 pages for 'dennis ditch'.

Page 16 of 17

  • Get to Know a Candidate (8 of 25): Rocky Anderson – Justice Party

    - by Brian Lanham
    DISCLAIMER: This is not a post about "Romney" or "Obama". This is not a post about whom I am voting for. Information sourced from Wikipedia.

    Ross Carl "Rocky" Anderson served two terms as the 33rd mayor of Salt Lake City, Utah, between 2000 and 2008. He is the Executive Director of High Road for Human Rights. Prior to serving as Mayor, he practiced law for 21 years in Salt Lake City, during which time he was listed in Best Lawyers in America, was rated A-V (highest rating) by Martindale-Hubbell, served as Chair of the Utah State Bar Litigation Section, and was Editor-in-Chief of, and a contributor to, the Voir Dire legal journal.

    As mayor, Anderson rose to nationwide prominence as a champion of several national and international causes, including climate protection, immigration reform, restorative criminal justice, LGBT rights, and an end to the "war on drugs". Before and after the U.S. invasion of Iraq in 2003, Anderson was a leading opponent of the invasion and occupation of Iraq and related human rights abuses. Anderson was the only mayor of a major U.S. city who advocated for the impeachment of President George W. Bush, which he did in many venues throughout the United States.

    Anderson's work and advocacy led to local, national, and international recognition in numerous spheres, including being named by Business Week as one of the top twenty activists in the world on climate change, serving on the Newsweek Global Environmental Leadership Advisory Board, and being recognized by the Human Rights Campaign as one of the top ten straight advocates in the United States for LGBT equality. He has also received numerous awards for his work, including the EPA Climate Protection Award, the Sierra Club Distinguished Service Award, the Respect the Earth Planet Defender Award, the National Association of Hispanic Publications Presidential Award, the Drug Policy Alliance Richard J. Dennis Drugpeace Award, the Progressive Democrats of America Spine Award, the League of United Latin American Citizens Profile in Courage Award, the Bill of Rights Defense Committee Patriot Award, the Code Pink (Salt Lake City) Pink Star honor, the Morehouse University Gandhi, King, Ikeda Award, and the World Leadership Award for environmental programs.

    Formerly a member of the Democratic Party, Anderson expressed his disappointment with that party in 2011, stating, "The Constitution has been eviscerated while Democrats have stood by with nary a whimper. It is a gutless, unprincipled party, bought and paid for by the same interests that buy and pay for the Republican Party."

    Anderson announced his intention to run for President in 2012 as a candidate for the newly formed Justice Party. Although founded by Rocky Anderson of Utah, the Justice Party was first recognized by Mississippi and describes itself as advocating economic justice through measures such as green jobs and a right to organize, environmental justice through enforcing employee safeguards in trade agreements, and social and civic justice through universal health care. In its first press release, the Utah Justice Party set forth its goals for justice in the economic, environmental, social and civic realms, along with a call to rid the corrupting influence of big money from government, to reverse the erosion of rights guaranteed by the Constitution, and to stop draining American resources to support illegal wars of aggression.
    Its press release says its grassroots supporters believe that now is the time for all to "shed their skeptical view that their voices don't matter", that "our 2-party system is a 'duopoly' controlled by the same corporate and military interests", and that the people must act to ensure "that our nation will achieve a brighter, sustainable future." Anderson has ballot access in CO, CT, FL, ID, LA, MI, MN, MS, NJ, NM, OR, RI, TN, UT, VT, WA (152 electoral votes) and has write-in access in AL, AK, DE, GA, IL, IA, KS, MD, MO, NE, NH, NY, PA, TX. Learn more about Rocky Anderson and the Justice Party on Wikipedia.

    Read the article

  • Enumerating all strings in resx

    - by Erik Hesselink
    We would like to enumerate all strings in a resource file in .NET (resx file). We want this to generate a javascript object containing all these key-value pairs. We do this now for satellite assemblies with code like this (this is VB.NET, but any example code is fine):

        Dim rm As ResourceManager
        rm = New ResourceManager([resource name], [your assembly])
        Dim Rs As ResourceSet
        Rs = rm.GetResourceSet(Thread.CurrentThread.CurrentCulture, True, True)
        For Each Kvp As DictionaryEntry In Rs
            [Write out Kvp.Key and Kvp.Value]
        Next

    However, we haven't found a way to do this for .resx files yet, sadly. How can we enumerate all localization strings in a resx file?

    UPDATE: Following Dennis Myren's comment and the ideas from here, I built a ResXResourceManager. Now I can do the same with .resx files as I did with the embedded resources. Here is the code. Note that Microsoft made a needed constructor private, so I use reflection to access it. You need full trust when using this.

        Imports System.Globalization
        Imports System.Reflection
        Imports System.Resources
        Imports System.Windows.Forms

        Public Class ResXResourceManager
            Inherits ResourceManager

            Public Sub New(ByVal BaseName As String, ByVal ResourceDir As String)
                Me.New(BaseName, ResourceDir, GetType(ResXResourceSet))
            End Sub

            Protected Sub New(ByVal BaseName As String, ByVal ResourceDir As String, ByVal UsingResourceSet As Type)
                Dim BaseType As Type = Me.GetType().BaseType
                Dim Flags As BindingFlags = BindingFlags.NonPublic Or BindingFlags.Instance
                Dim Constructor As ConstructorInfo = BaseType.GetConstructor(Flags, Nothing, New Type() { GetType(String), GetType(String), GetType(Type) }, Nothing)
                Constructor.Invoke(Me, Flags, Nothing, New Object() { BaseName, ResourceDir, UsingResourceSet }, Nothing)
            End Sub

            Protected Overrides Function GetResourceFileName(ByVal culture As CultureInfo) As String
                Dim FileName As String
                FileName = MyBase.GetResourceFileName(culture)
                If FileName IsNot Nothing AndAlso FileName.Length > 10 Then
                    Return FileName.Substring(0, FileName.Length - 10) & ".resx"
                End If
                Return Nothing
            End Function
        End Class
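    For the plain enumerate-a-.resx case there is also a stock reader class; a minimal C# sketch (ResXResourceReader lives in the System.Resources namespace but ships in System.Windows.Forms.dll, and the file name here is assumed):

        using System;
        using System.Collections;
        using System.Resources;

        class ResxDump
        {
            static void Main()
            {
                // Walks the .resx file on disk directly -- no ResourceManager,
                // no satellite assembly, no reflection into private constructors.
                using (var reader = new ResXResourceReader("Strings.resx"))
                {
                    foreach (DictionaryEntry entry in reader)
                        Console.WriteLine("{0} = {1}", entry.Key, entry.Value);
                }
            }
        }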

    Read the article

  • App crashes on second execution, but only in Release configuration

    - by denbec
    Hey all, I know this is probably not an easy question to answer, as it's hard to describe. I have an app that runs without problems on the device in the Debug configuration (also multiple times). Once I put it into the Release configuration (which I need before publishing?), the app starts without problems and I can proceed to the next page, where I show a Core Plot graph. BUT only if I run it from Xcode. As soon as I end the app and start it again, it opens without problems, but on the next page, it crashes. Now I don't have anything to debug other than the crash report:

        Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
        Exception Codes: KERN_INVALID_ADDRESS at 0xcf10000a
        Crashed Thread:  0

        Thread 0 Crashed:
        0   libobjc.A.dylib   0x000026f2 objc_msgSend + 14
        1   StandbyCheck      0x0001fbea -[CPXYTheme newGraph] (CPXYTheme.m:36)
        2   StandbyCheck      0x00007c06 -[SCGraphCell initWithStyle:reuseIdentifier:] (SCGraphCell.m:28)
        3   StandbyCheck      0x00076b4a -[TTTableViewDataSource tableView:cellForRowAtIndexPath:] (TTTableViewDataSource.m:128)
        4   UIKit             0x0007797a -[UITableView(UITableViewInternal) _createPreparedCellForGlobalRow:withIndexPath:] + 514
        5   UIKit             0x000776b0 -[UITableView(UITableViewInternal) _createPreparedCellForGlobalRow:] + 28
        6   UIKit             0x00037e78 -[UITableView(_UITableViewPrivate) _updateVisibleCellsNow] + 940
        7   UIKit             0x000367d4 -[UITableView layoutSubviews] + 176
        8   StandbyCheck      0x000734b8 -[TTTableView layoutSubviews] (TTTableView.m:226)
        [...]

    Now, can someone point me in any direction? What are the differences between Debug and Release modes? How could I possibly debug this failure? I've been searching for hours now, please help me :( Thanks, Dennis

    Read the article

  • Any way to avoid creating a huge C# COM interface wrapper when only a few methods needed?

    - by Paul Accisano
    Greetings all, I’m working on a C# program that requires being able to get the index of the hot item in Windows 7 Explorer’s new ItemsView control. Fortunately, Microsoft has provided a way to do this through UI Automation, by querying custom properties of the control. Unfortunately, the System.Windows.Automation namespace inexplicably does not seem to provide a way to query custom properties! This leaves me with the undesirable position of having to completely ditch the C# Automation namespace and use only the unmanaged COM version. One way to do it would be to put all the Automation code in a separate C++/CLI module and call it from my C# application. However, I would like to avoid this option if possible, as it adds more files to my project, and I’d have to worry about 32/64-bit problems and such. The other option is to make use of the ComImport attribute to declare the relevant interfaces and do everything through COM-interop. This is what I would like to do. However, the relevant interfaces, such as IUIAutomation and IUIAutomationElement, are FREAKING HUGE. They have hundreds of methods in total, and reference tons and tons of interfaces (which I assume I would have to also declare), almost all of which I will never ever use. I don’t think the UI Automation interfaces are declared in any Type Library either, so I can’t use TLBIMP. Is there any way I can avoid having to manually translate a bajillion method signatures into C# and instead only declare the ten or so methods I actually need? I see that C# 4.0 added a new “dynamic” type that is supposed to ease COM interop; is that at all relevant to my problem? Thanks
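    In case it saves someone the full port: with [ComImport], the CLR binds methods to COM vtable slots by declaration order rather than by name, so a common trick is to declare placeholders for the slots you skip and real signatures only for the handful of methods you call. A hedged sketch - the IID shown is what UIAutomationClient.h declares for IUIAutomationElement, but both it and the slot layout must be verified against the header before use:

        using System;
        using System.Runtime.InteropServices;

        [ComImport]
        [Guid("d22108aa-8ac5-49a5-837b-37bbb3d7591e")] // verify against UIAutomationClient.h
        [InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
        interface IUIAutomationElementSlim
        {
            // Placeholders: never called, they only keep the vtable slots aligned.
            void _VtblGap1();
            void _VtblGap2();
            // ...one placeholder per skipped method, in header order...

            // Hypothetical target method -- the real name, position, and
            // marshaling must match the header declaration exactly.
            [return: MarshalAs(UnmanagedType.Struct)]
            object GetCurrentPropertyValue(int propertyId);
        }

    This keeps everything in C# while only spelling out the methods actually used; the C++/CLI shim mentioned in the question remains the fallback if the slot bookkeeping gets out of hand.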

    Read the article

  • ASP.NET MVC and NHibernate coupling

    - by Ben
    I have just started learning NHibernate. Over the past few months I have been using IoC / DI (StructureMap) and the repository pattern, and it has made my applications much more loosely coupled and easier to test. When switching my persistence layer to NHibernate I decided to stick with my repositories. Currently I am creating a new session on each method call, but of course this means that I cannot benefit from lazy loading. Therefore I wish to implement session-per-request, but in doing so this will make my web project dependent on NHibernate (perhaps this is not such a bad thing?). I was planning to inject ISession into my repositories and create and dispose sessions on BeginRequest/EndRequest events (see http://ayende.com/Blog/archive/2009/08/05/do-you-need-a-framework.aspx and the sketch below). Is this a good approach? Presumably I cannot use session-per-request without having a reference to NHibernate in my web project?

    Having the web project dependent on NHibernate prompts my next (few) questions - why even bother with the repository? Since my web app is calling services that talk to the repositories, why not ditch the repositories and just add my NHibernate persistence code inside the services? And finally, is there really any need to split out into so many projects? Is a web project and an infrastructure project sufficient?

    I realise that I have veered off a bit from my original question, but it seems that everyone has their own opinion on these topics. Some people use the repository pattern with NHibernate, some don't. Some people stick their mapping files with the related classes, others have a separate project for this. Many thanks, Ben
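    For what it's worth, the BeginRequest/EndRequest wiring from the linked post boils down to very little code; a minimal sketch, assuming a session factory built once at startup (all names illustrative):

        using System;
        using System.Web;
        using NHibernate;

        public class MvcApplication : HttpApplication
        {
            // Built once in Application_Start; creating the factory is expensive.
            public static ISessionFactory SessionFactory;

            protected void Application_BeginRequest(object sender, EventArgs e)
            {
                // One session per request, parked where the container can find it.
                HttpContext.Current.Items["nh.session"] = SessionFactory.OpenSession();
            }

            protected void Application_EndRequest(object sender, EventArgs e)
            {
                var session = HttpContext.Current.Items["nh.session"] as ISession;
                if (session != null)
                    session.Dispose();
            }
        }

    StructureMap can then be told to resolve ISession from HttpContext.Current.Items, which keeps the repositories themselves free of any NHibernate session-management code.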

    Read the article

  • Why can't I initialize a class through a setter?

    - by Rob emenaker
    If I have a custom class called Tires:

        #import <Foundation/Foundation.h>

        @interface Tires : NSObject {
        @private
            NSString *brand;
            int size;
        }

        @property (nonatomic,copy) NSString *brand;
        @property int size;

        - (id)init;
        - (void)dealloc;

        @end

        =============================================

        #import "Tires.h"

        @implementation Tires

        @synthesize brand, size;

        - (id)init {
            if (self = [super init]) {
                [self setBrand:[[NSString alloc] initWithString:@""]];
                [self setSize:0];
            }
            return self;
        }

        - (void)dealloc {
            [super dealloc];
            [brand release];
        }

        @end

    And I synthesize a setter and getter in my View Controller:

        #import <UIKit/UIKit.h>
        #import "Tires.h"

        @interface testViewController : UIViewController {
            Tires *frontLeft, *frontRight, *backleft, *backRight;
        }

        @property (nonatomic,copy) Tires *frontLeft, *frontRight, *backleft, *backRight;

        @end

        ====================================

        #import "testViewController.h"

        @implementation testViewController

        @synthesize frontLeft, frontRight, backleft, backRight;

        - (void)viewDidLoad {
            [super viewDidLoad];
            [self setFrontLeft:[[Tires alloc] init]];
        }

        - (void)dealloc {
            [super dealloc];
        }

        @end

    It dies after [self setFrontLeft:[[Tires alloc] init]] comes back. It compiles just fine, and when I run the debugger it actually gets all the way through the init method on Tires, but once it comes back it just dies and the view never appears. However, if I change the viewDidLoad method to:

        - (void)viewDidLoad {
            [super viewDidLoad];
            frontLeft = [[Tires alloc] init];
        }

    it works just fine. I could just ditch the setter and access the frontLeft variable directly, but I was under the impression I should use setters and getters as much as possible, and logically it seems like the setFrontLeft method should work. This brings up an additional question that my coworkers keep asking in these regards (we are all new to Objective-C): why use a setter and getter at all if you are in the same class as those setters and getters?

    Read the article

  • WPF Grid Column MaxWidth not enforced

    - by Trevor Hartman
    This problem stems from not being able to get my TextBlock to wrap. Basically as a last-ditch attempt I am setting MaxWidth on my container grid's columns. I was surprised to find that my child label and textbox still do whatever they want (bad children, BAD) and are not limited by my grid column's MaxWidth="200". What I'm really trying to do is let my TextBlock fill available width and wrap if necessary. So far after trying many variations of HorizontalAlignment="Stretch" on every known parent in the universe, nothing works, except setting an explicit MaxWidth="400" or whatever number on the TextBlock. This is not good because I need the TextBlock to fill available width, not be limited by some fixed number. Thanks!

        <ItemsControl>
          <ItemsControl.ItemsPanel>
            <ItemsPanelTemplate>
              <StackPanel />
            </ItemsPanelTemplate>
          </ItemsControl.ItemsPanel>
          <ItemsControl.ItemTemplate>
            <DataTemplate>
              <Grid>
                <Grid.ColumnDefinitions>
                  <ColumnDefinition MaxWidth="200" SharedSizeGroup="A" />
                  <ColumnDefinition MaxWidth="200" SharedSizeGroup="B" />
                </Grid.ColumnDefinitions>
                <Label VerticalAlignment="Top" Margin="0 5 0 0" Grid.Column="0"
                       Style="{StaticResource LabelStyle}" Width="Auto"
                       Content="{Binding Value.Summary}" />
                <TextBlock Grid.Column="1" Margin="5,8,5,8" FontWeight="Normal"
                           Background="AliceBlue" Foreground="Black"
                           Text="{Binding Value.Description}"
                           HorizontalAlignment="Stretch" TextWrapping="Wrap" Height="Auto" />
              </Grid>
            </DataTemplate>
          </ItemsControl.ItemTemplate>
        </ItemsControl>

    Read the article

  • How to recursive rake? -- or suitable alternatives

    - by TerryP
    I want my project's top-level Rakefile to build things using Rakefiles deeper in the tree; i.e. the top-level Rakefile says how to build the project (big picture) and the lower-level ones build a specific module (local picture). There is of course a shared set of configuration for the minute details of doing that whenever it can be shared between tasks: so it is mostly about keeping the descriptions of what needs building as close as possible to the sources being built. E.g. /Source/Module/code.foo and co. should be built using the instructions in /Source/Module/Rakefile; and /Rakefile understands the dependencies between modules. I don't care if it uses multiple rake processes (à la recursive make), or just creates separate build environments. Either way it should be self-containable enough to be processed by a queue: so that non-dependent modules could be built simultaneously. The problem is, how the heck do you actually do something like that with Rake!? I haven't been able to find anything meaningful on the Internet, nor in the documentation. I tried creating a new Rake::Application object and setting it up, but whatever methods I try invoking, only exceptions or "Don't know how to build task ':default'" errors get thrown. (Yes, all Rakefiles have a :default.) Obviously one could just execute 'rake' in a subdirectory for a :modulename task, but that would ditch the options given to the top level; e.g. think of $(MAKE) and $(MAKEFLAGS). Anyone have a clue on how to properly do something like a recursive rake?

    Read the article

  • ASP.NET MVC WAP, SharePoint Designer and SVN

    - by David Lively
    All, I'm starting a new ASP.NET MVC project which requires some content management capabilities. The people who will be managing the content prefer to use SharePoint Designer (successor to FrontPage) to modify content. I'd like to allow them to keep doing that. The issues are:

    - Since I'd like this to be a WAP, not a website project, how can I allow them to see their changes in action without requiring them to have Visual Studio on their local machines? Can I specify a "default" action for a controller so that, given a URL like /products/new_view_here, they can save pages (views) and see them in the browser without having to go through the check-in/build/deploy process? (A catch-all route can get part of the way there; see the sketch below.)
    - I'd like their changes to be stored in SVN; SharePoint Designer seems to only support Visual SourceSafe (ugh) directly.

    The ideas I've come up with so far are:

    - Write an HTTP handler that implements the FrontPage Server Extensions protocol. This sounds time consuming, but I haven't yet looked at the protocol spec. However, it would allow me to perform whatever operations I want on the server side, including checking files into SVN.
    - Ditch the WAP in favor of a website project. I do not like having the source present on the server, however. Also, will MVC work in a website project?

    Surely someone has tackled this problem before?
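    On the "default action" question above, a hedged sketch of the catch-all route idea (standard ASP.NET MVC routing; all names illustrative):

        // In RegisterRoutes(RouteCollection routes) in Global.asax.cs:
        // {*viewPath} is a catch-all token, so /products/new_view_here
        // reaches a single action with viewPath = "new_view_here".
        routes.MapRoute(
            "DesignerPages",
            "products/{*viewPath}",
            new { controller = "Products", action = "Render" });

        // Hypothetical controller action resolving the path to a view by name.
        public ActionResult Render(string viewPath)
        {
            return View(viewPath);
        }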

    Read the article

  • How to parse text as JavaScript?

    - by Danjah
    This question of mine (currently unanswered) drove me toward finding a better solution to what I'm attempting. My requirements:

    - Chunks of code which can be arbitrarily added into a document, without an identifier:

        [div class="thing"]
          [elements... /]
        [/div]

    - The objects are scanned for and found by an external script:

        var things = yd.getElementsBy(function(el){ return yd.hasClass('thing'); }, null, document);

    - The objects must be individually configurable; what I have currently is identifier-based:

        [div class="thing" id="thing0"]
          [elements... /]
          [script type="text/javascript"]
            new Thing().init({ id:'thing0' });
          [/script]
        [/div]

    So I need to ditch the identifier (id="thing0") so there are no duplicates when more than one chunk of the same code is added to a page. I still need to be able to config these objects individually, without an identifier.

    SO! All of that said, I wondered about creating a dynamic global variable within the script block of each added chunk of code, within its script tag. As each 'thing' is found, I figure it would be legit to grab the innerHTML of the script tag and somehow convert that text into a useable JS object. Discuss. Ok, don't discuss if you like, but if you get the drift then feel free to correct my wayward thinking or provide a better solution - please! d

    Read the article

  • Why does a simple ApplicationSetting PropertyBinding for a Form not work in C#?

    - by Mike Rosenblum
    My question involves the simple walkthrough shown in the article Preserve Size and Location of Windows Forms – Part I by Dennis Wallentin. This approach works 100% fine when using VB.NET. When using the same steps in C#, however, the settings within the Settings tab of the application's properties look correct, and the app.config file looks right, but the values are not saved when you run it. The app.config file winds up looking like this:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <configSections>
            <sectionGroup name="userSettings" type="System.Configuration.UserSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" >
              <section name="WindowsAppCs.Properties.Settings" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" allowExeDefinition="MachineToLocalUser" requirePermission="false" />
            </sectionGroup>
          </configSections>
          <userSettings>
            <WindowsAppCs.Properties.Settings>
              <setting name="Location" serializeAs="String">
                <value>0, 0</value>
              </setting>
              <setting name="Size" serializeAs="String">
                <value>284, 262</value>
              </setting>
            </WindowsAppCs.Properties.Settings>
          </userSettings>
        </configuration>

    It looks right to me, but the values do not update when running hosted within Visual Studio or when running the compiled EXE. I am sure that something very simple needs to be added or done, but I don't know what. Does anyone have any ideas here? Much thanks in advance...
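    For what it's worth, the usual VB-versus-C# difference here is that the VB application framework can auto-save My.Settings on shutdown, while a C# project only persists user-scoped settings when Save() is called explicitly; the values then land in a per-user user.config file rather than back in app.config. A minimal sketch, assuming the Location and Size settings from the excerpt are property-bound to the form:

        // Form1.cs -- persist the bound settings on the way out.
        public partial class Form1 : System.Windows.Forms.Form
        {
            private void Form1_FormClosing(object sender, System.Windows.Forms.FormClosingEventArgs e)
            {
                // Without this call, nothing is ever written to user.config.
                Properties.Settings.Default.Save();
            }
        }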

    Read the article

  • Pointers, am I doing them correctly? Objective-C/Cocoa

    - by Chris
    I have this in my @interface:

        struct track currentTrack;
        struct track previousTrack;
        int anInt;

    Since these are not objects, I do not have to have them like int* anInt, right? And if setting non-object values like ints, booleans, etc., I do not have to release the old value, right (assuming a non-GC environment)? The struct contains objects:

        typedef struct track {
            NSString* theId;
            NSString* title;
        } *track;

    Am I doing that correctly? Lastly, I access the struct like this:

        [currentTrack.title ...];
        currentTrack.theId = @"asdf"; //LINE 1

    I'm also manually managing the memory (from a setter) for the struct like this:

        [currentTrack.title autorelease];
        currentTrack.title = [newTitle retain];

    If I'm understanding the garbage collection correctly, I should be able to ditch that and just set it like LINE 1 (above)? Also with garbage collection, I don't need a dealloc method, right? If I use garbage collection, does this mean it only runs on OS 10.5+? And any other thing I should know before I switch to garbage-collected code? Sorry there are so many questions. Very new to Objective-C and desktop programming. Thanks

    Read the article

  • C# Late Binding for Parameterized Property

    - by optim
    I'm trying to use late binding to connect to a COM automation API provided by a program called Amibroker, using a C# WinForms project. So far I've been able to connect to everything in the API except one item, which I believe to be a "parameterized property" based on extensive Googling. Here's what the API specification looks like according to the docs (full version here: http://www.amibroker.com/guide/objects.html):

        Property Filter(ByVal nType As Integer, ByVal pszCategory As String) As Long [r/w]

    A JavaScript snippet to update the value looks like this:

        AB = new ActiveXObject("Broker.Application");
        AA = AB.Analysis;
        AA.Filter( 0, "market" ) = 0;

    Using the following C# late-binding code, I can get the value of the property, but I can't for the life of me figure out how to set the value:

        object[] parameter = new object[2];
        parameter[0] = Number;
        parameter[1] = Type;
        object filters = _analysis.GetType().InvokeMember("Filter", BindingFlags.GetProperty, null, _analysis, parameter);

    So far I have tried:

    - using BindingFlags.SetProperty and BindingFlags.SetField
    - casting the returned object to a PropertyInfo object and trying to update the value using it
    - adding an extra object containing the value to the parameters array
    - various other things as last-ditch efforts

    From what I can see, this should be straightforward, but I'm finding the late binding in C# to be cumbersome at best. The property looks like a method call to me, which is what is throwing me off. How does one assign a value to a method, and what would the prototype for late-binding C# code look like for it? Hopefully that explains it well enough, but feel free to ask if I've left anything unclear. Thanks in advance for any help! Daniel
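    If it helps anyone hitting the same API: the assignment form of a parameterized COM property goes through BindingFlags.SetProperty, and for COM objects InvokeMember expects the index arguments first with the value to assign as the last element of the array. A hedged sketch against the same _analysis object:

        // Late-bound equivalent of the script line  AA.Filter(0, "market") = 0;
        object[] args = new object[3];
        args[0] = Number;   // nType
        args[1] = Type;     // pszCategory
        args[2] = 0L;       // the Long value being written -- always last

        _analysis.GetType().InvokeMember("Filter",
            System.Reflection.BindingFlags.SetProperty, null, _analysis, args);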

    Read the article

  • What could cause a PHP error on an include statement?

    - by J Jones
    I've got a bug in my PHP code that's been terrorizing me for several days now. I'm trying to clasp in a new module to an existing Magento (v1.4) site, though I'm very new to the Magento framework. I think I am pretty close to getting to "Hello, World" on a block that I want displayed in the backend, but I'm getting a 500 error when the menu item is selected. I managed to track it down (using echo statements) to a line in the Layout.php file (app\code\core\Mage\Core\Model\Layout.php, line 472ish):

        if (class_exists($block, false) || mageFindClassFile($block)) {
            $temp = $block;
            echo "<p>before constructor: $temp</p>";
            $block = new $block($attributes);
            echo "<p>after constructor: $temp</p>";
        }

    For my block, this yields only "before constructor...", so I know this is what is failing. A little more debugging reveals that the class in $block (the new block I am trying to show) does not exist. I would have expected the __autoload function to take care of this, but none of my echoes in __autoload are displaying. As a last-ditch effort, I tried an include statement in Mage.php pointing to the absolute location of the block class, but similar before-and-after echoes reveal that that include statement becomes the breaking line.

    I'm tempted to start thinking "permissions", but I'm not well versed in the server management side of all this, and I have limited access to the test server (the test server belongs to the client). To anticipate the question: there are no errors reported in the PHP log file. I am actually not convinced that this site is reporting errors to the log file (I haven't seen anything from this site), though the client is certain that everything is turned on. IIS 7. Integrated mode, I'm pretty sure. Anyone know what could be causing this?

    Read the article

  • Java Play Mustache NPE Error

    - by zanedev
    We are getting a Mustache Play error in production (Amazon Linux EC2 AMI) but not in development (Macs), and we have tried upgrading the JVM, using the JDK instead, and changing from a Tomcat deploy model to match our development environments as much as possible, but nothing is working. Please, any help would be greatly appreciated. We have lots of shared code in Java and JavaScript using Mustache, and it would be a big deal to rewrite everything if we had to ditch Mustache on the Java side.

        20:48:52,403 ERROR ~ @6al2dd0po Internal Server Error (500) for request GET /mystuff/people
        Execution exception (In {module:mustache-0.2}/app/play/modules/mustache/MustacheTags.java around line 32)
        NullPointerException occured : null

        play.exceptions.JavaExecutionException
            at play.templates.BaseTemplate.throwException(BaseTemplate.java:90)
            at play.templates.GroovyTemplate.internalRender(GroovyTemplate.java:257)
            at play.templates.Template.render(Template.java:26)
            at play.templates.GroovyTemplate.render(GroovyTemplate.java:187)
            at play.mvc.results.RenderTemplate.<init>(RenderTemplate.java:24)
            at play.mvc.Controller.renderTemplate(Controller.java:660)
            at play.mvc.Controller.renderTemplate(Controller.java:640)
            at play.mvc.Controller.render(Controller.java:695)
            at controllers.MyStuff.people(MyStuff.java:183)
            at play.mvc.ActionInvoker.invokeWithContinuation(ActionInvoker.java:548)
            at play.mvc.ActionInvoker.invoke(ActionInvoker.java:502)
            at play.mvc.ActionInvoker.invokeControllerMethod(ActionInvoker.java:478)
            at play.mvc.ActionInvoker.invokeControllerMethod(ActionInvoker.java:473)
            at play.mvc.ActionInvoker.invoke(ActionInvoker.java:161)
            at Invocation.HTTP Request(Play!)
        Caused by: java.lang.NullPointerException
            at play.modules.mustache.MustacheTags._template(MustacheTags.java:32)
            at play.modules.mustache.MustacheTags$_template.call(Unknown Source)
            at /app/views/User/people.html.(line:22)
            at play.templates.GroovyTemplate.internalRender(GroovyTemplate.java:232)
            ... 13 more

    Read the article

  • Google Maps API v3 not working

    - by user1496322
    I've been banging my head on the wall after going through the documentation on this several times! I can't seem to get past the API error to get the map to appear on my site. I am getting the following error message from the web page where I want the map to be displayed:

        Google has disabled use of the Maps API for this application. The provided key is not a valid Google API Key, or it is not authorized for the Google Maps Javascript API v3 on this site. If you are the owner of this application, you can learn about obtaining a valid key here: https://developers.google.com/maps/documentation/javascript/tutorial#Obtaining_Key

    I have (several times now) gone into my account and 1) enabled the Maps v3 API service, 2) generated a new API key, and 3) added my allowed referrers to the key (both www.domain.com and domain.com URLs). I have the following added to the head of the web page:

        <script src="http://maps.googleapis.com/maps/api/js?sensor=false&key=MY_API_KEY_HERE" type="text/JavaScript" language="JavaScript"></script>

    And... I have the following JavaScript function that executes when a link is clicked on the page:

        alert("viewMap()");
        var map = new GMap3(document.getElementById("map_canvas"));
        var geocoder = new GClientGeocoder();
        var address = "1600 Amphitheatre Parkway, Mountain View";
        alert("Calling getLatLng ...");
        geocoder.getLatLng(address, function(point) {
            var latitude = point.y;
            var longitude = point.x;
            // do something with the lat lng
            alert("Lat:"+latitude+" - Lng:"+longitude);
        });

    The initial 'viewMap' alert is displayed and then is followed by the 'Google has disabled use...' error message. The error console is also showing 'GMap3 is not defined'. Can anyone please assist with showing me the errors of my ways?!?!? Thank you in advance for any help you can provide. -Dennis

    Read the article

  • Can a WebServiceHost be changed to avoid the use of HttpListener?

    - by sbyse
    I am looking for a way to use a WCF WebServiceHost without having to rely on the HttpListener class and its associated permission problems (see this question for details). I'm working on an application which communicates locally with another (third-party) application via their REST API. At the moment we are using WCF as an embedded HTTP server. We create a WebServiceHost as follows:

        String hostPath = "http://localhost:" + portNo;
        WebServiceHost host = new WebServiceHost(typeof(IntegrationService), new Uri(hostPath));

        // create a webhttpbinding for rest/pox and enable cookie support for session management
        WebHttpBinding webHttpBinding = new WebHttpBinding();
        webHttpBinding.AllowCookies = true;
        ServiceEndpoint ep = host.AddServiceEndpoint(typeof(IIntegrationService), webHttpBinding, "");
        host.Open();

        ChannelFactory<IIntegrationService> cf = new ChannelFactory<IIntegrationService>(webHttpBinding, hostPath);
        IIntegrationService channel = cf.CreateChannel();

    Everything works nicely as long as our application is run as administrator. If we run our application on a machine without administrative privileges, the host.Open() will throw an HttpListenerException with ErrorCode == 5 (ERROR_ACCESS_DENIED). We can get around the problem by running httpcfg.exe from the command line, but this is a one-click desktop application and that's not really a long-term solution for us. We could ditch WCF and write our own HTTP server, but I'd like to avoid that if possible. What's the easiest way to replace HttpListener with a standard TCP socket while still using all of the remaining HTTP scaffolding that WCF provides?

    Read the article

  • Computation overhead in C# - Using getters/setters vs. modifying arrays directly and casting speeds

    - by Jeffrey Kern
    I was going to write a long-winded post, but I'll boil it down here: I'm trying to emulate the graphical old-school style of the NES via XNA. However, my FPS is SLOW, trying to modify 65K pixels per frame. If I just loop through all 65K pixels and set them to some arbitrary color, I get 64 FPS. With the code I made to look up which colors should be placed where, I get 1 FPS. I think it is because of my object-oriented code.

    Right now, I have things divided into about six classes, with getters/setters. I'm guessing that I'm making at least 360K getter calls per frame, which I think is a lot of overhead. Each class contains either/and-or 1D or 2D arrays containing custom enumerations, int, Color, or Vector2D, bytes. What if I combined all of the classes into just one, and accessed the contents of each array directly? The code would look a mess, and ditch the concepts of object-oriented coding, but the speed might be much faster. I'm also not concerned about access violations, as any attempts to get/set the data in the arrays will be done in blocks. E.g., all writing to arrays will take place before any data is accessed from them.

    As for casting, I stated that I'm using custom enumerations, int, Color, and Vector2D, bytes. Which data types are fastest to use and access in the .NET Framework, XNA, Xbox, C#? I think that constant casting might be a cause of slowdown here. Also, instead of using math to figure out which indexes data should be placed in, I've used precomputed lookup tables so I don't have to use constant multiplication, addition, subtraction, division per frame. :)
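    Since the question is essentially "how do I avoid 360K property calls per frame", it may help to see the shape of the usual XNA approach: one flat Color[] written with plain array indexing, uploaded to the GPU once per frame. A sketch, with palette and indexBuffer as hypothetical stand-ins for the lookup tables described above:

        // One reusable frame buffer -- created once, not per frame.
        Color[] frame = new Color[256 * 240];                  // NES-ish resolution
        Texture2D screen = new Texture2D(GraphicsDevice, 256, 240);

        // Per frame: direct array reads/writes only -- no getters, no casts.
        for (int i = 0; i < frame.Length; i++)
            frame[i] = palette[indexBuffer[i]];                // byte -> Color lookup

        screen.SetData(frame);                                 // single GPU upload per frame
        spriteBatch.Begin();
        spriteBatch.Draw(screen, Vector2.Zero, Color.White);   // scale up as needed
        spriteBatch.End();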

    Read the article

  • Memory Management with returning char* function

    - by RageD
    Hello all, today, without much thought, I wrote a simple function returning a char* based on a switch statement of given enum values. This, however, made me wonder how I could release that memory. What I did was something like this:

        char* func() {
            char* retval = new char;
            // Switch blah blah - will always return some value other than NULL since default:
            return retval;
        }

    I apologize if this is a naive question, but what is the best way to release the memory, seeing as I cannot delete the memory after the return and, obviously, if I delete it before, I won't have a returned value. What I was thinking as a viable solution was something like this:

        void func(char*& in) {
            // blah blah switch make it do something
        }

        int main() {
            char* val = new char;
            func(val);
            // Do whatever with func (normally func within a data structure with specific enum set so could run multiple times to change output)
            delete val;
            val = NULL;
            return 0;
        }

    Would anyone have any more insight on this and/or an explanation of which approach to use? Regards, Dennis M.

    Read the article

  • chrooting user causes "connection closed" message when using sftp

    - by George Reith
    First off, I am a Linux newbie, so please don't assume much knowledge. I am using CentOS 5.8 (final) and OpenSSH version 5.8p1. I have made a user playwithbits and I am attempting to chroot them to the directory /home/nginx/domains/playwithbits/public. I am using the following Match statement in my sshd_config file:

        Match group web-root-locked
            ChrootDirectory /home/nginx/domains/%u/public
            X11Forwarding no
            AllowTcpForwarding no
            ForceCommand /usr/libexec/openssh/sftp-server

    Running id playwithbits returns:

        uid=504(playwithbits) gid=504(playwithbits) groups=504(playwithbits),507(web-root-locked)

    I have changed the user's home directory to /home/nginx/domains/playwithbits/public. Now when I attempt to sftp in with this user I instantly get the message "connection closed". Does anyone know what I am doing wrong?

    Edit: Following advice from @Dennis Williamson I have connected in debug mode (I think... correct me if I'm wrong). I have made a bit of progress by using chmod to set permissions recursively of all files in the directory to 700. Now I get the following messages when I attempt to log on (still connection refused):

        Connection from [My ip address] port 38737
        debug1: Client protocol version 2.0; client software version OpenSSH_5.6
        debug1: match: OpenSSH_5.6 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.8
        debug1: permanently_set_uid: 74/74
        debug1: list_hostkey_types: ssh-rsa,ssh-dss
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST received
        debug1: SSH2_MSG_KEX_DH_GEX_GROUP sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_INIT
        debug1: SSH2_MSG_KEX_DH_GEX_REPLY sent
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: KEX done
        debug1: userauth-request for user playwithbits service ssh-connection method none
        debug1: attempt 0 failures 0
        debug1: user playwithbits matched group list web-root-locked at line 91
        debug1: PAM: initializing for "playwithbits"
        debug1: PAM: setting PAM_RHOST to [My host info]
        debug1: PAM: setting PAM_TTY to "ssh"
        debug1: userauth-request for user playwithbits service ssh-connection method password
        debug1: attempt 1 failures 0
        debug1: PAM: password authentication accepted for playwithbits
        debug1: do_pam_account: called
        Accepted password for playwithbits from [My ip address] port 38737 ssh2
        debug1: monitor_child_preauth: playwithbits has been authenticated by privileged process
        debug1: SELinux support disabled
        debug1: PAM: establishing credentials
        User child is on pid 3942
        debug1: PAM: establishing credentials
        Changed root directory to "/home/nginx/domains/playwithbits/public"
        debug1: permanently_set_uid: 504/504
        debug1: Entering interactive session for SSH2.
        debug1: server_init_dispatch_20
        debug1: server_input_channel_open: ctype session rchan 0 win 2097152 max 32768
        debug1: input_session_request
        debug1: channel 0: new [server-session]
        debug1: session_new: session 0
        debug1: session_open: channel 0
        debug1: session_open: session 0: link with channel 0
        debug1: server_input_channel_open: confirm session
        debug1: server_input_global_request: rtype keepalive@openssh.com want_reply 0
        debug1: server_input_channel_req: channel 0 request env reply 0
        debug1: session_by_channel: session 0 channel 0
        debug1: session_input_channel_req: session 0 req env
        debug1: server_input_channel_req: channel 0 request subsystem reply 1
        debug1: session_by_channel: session 0 channel 0
        debug1: session_input_channel_req: session 0 req subsystem
        subsystem request for sftp by user playwithbits
        debug1: subsystem: cannot stat /usr/libexec/openssh/sftp-server: Permission denied
        debug1: subsystem: exec() /usr/libexec/openssh/sftp-server
        debug1: Forced command (config) '/usr/libexec/openssh/sftp-server'
        debug1: session_new: session 0
        debug1: Received SIGCHLD.
        debug1: session_by_pid: pid 3943
        debug1: session_exit_message: session 0 channel 0 pid 3943
        debug1: session_exit_message: release channel 0
        debug1: session_by_channel: session 0 channel 0
        debug1: session_close_by_channel: channel 0 child 0
        debug1: session_close: session 0 pid 0
        debug1: channel 0: free: server-session, nchannels 1
        Received disconnect from [My ip address]: 11: disconnected by user
        debug1: do_cleanup
        debug1: do_cleanup
        debug1: PAM: cleanup
        debug1: PAM: closing session
        debug1: PAM: deleting credentials
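    For anyone who lands here with the same symptom: the "cannot stat /usr/libexec/openssh/sftp-server: Permission denied" line is the giveaway - once the session is chrooted, the external sftp-server binary can't be reached from inside the chroot. A hedged sshd_config sketch of the usual fix (OpenSSH 4.9 and later ship an in-process SFTP server, so nothing inside the chroot needs to be executed):

        Match group web-root-locked
            ChrootDirectory /home/nginx/domains/%u/public
            X11Forwarding no
            AllowTcpForwarding no
            ForceCommand internal-sftp

    Note too that sshd requires every component of the ChrootDirectory path to be root-owned and not group- or world-writable, so the recursive chmod 700 experiment above may itself need to be undone.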

    Read the article

  • Virtual Private Hosting DNS configuration

    - by Ciel
    I did a great deal of reading here before posting this because I didn't want to post a duplicate - but I'm on a bit of a deadline and getting frustrated, so here goes... I very, very, very sincerely apologize if this is long-winded or hard to read. Please - please just ask for any information or clarification and I will give it as quickly as I possibly can. This has become very frustrating to me and this is the last place I know to turn. I have no experience with setting up DNS, no experience with nameservers, and no peers to go to for help. So this is kind of my last-ditch effort.

    The task of setting up a private server has, through circumstances beyond my control, fallen into my lap. I own a domain (hereafter referred to as yyy.com) and have always used shared hosting - I buy a package and just point it to the domain nameservers they give me. It's always been simple. yyy.com is registered with Network Solutions. Now I have purchased a Virtual Private Hosting package from GoDaddy.com - and it comes with Plesk 11. I have no earthly idea how to begin to get the right nameserver for yyy.com. I have gone through the instructions and have wound up exceedingly frustrated. I have 2 IP addresses from GoDaddy for the server (obviously hidden for privacy):

        IP 1: XX.XX.XX.XX
        IP 2: YY.YY.YY.YY

    This is what I have so far, and I cannot tell if it is working (since propagation takes so long, it is extremely hard for me to test). After going through the documentation setup and waiting a few days, this is the setup I have - and so far it does not appear to be working:

        Host                Record type   Value
        XX.XX.XX.XX / 24    PTR           yyy.com.
        yyy.com.            NS            ns1.yyy.com.
        yyy.com.            A             XX.XX.XX.XX
        yyy.com.            MX (10)       mail.yyy.com.
        ftp.yyy.com.        CNAME         yyy.com.
        ipv4.yyy.com.       A             XX.XX.XX.XX
        mail.yyy.com.       A             XX.XX.XX.XX
        mssql.yyy.com.      A             XX.XX.XX.XX
        ns1.yyy.com.        A             XX.XX.XX.XX
        ns2.yyy.com.        A             YY.YY.YY.YY
        webmail.yyy.com.    A             XX.XX.XX.XX
        www.yyy.com.        CNAME         yyy.com.

    yyy.com is pointing to both ns1.yyy.com and ns2.yyy.com. Can anyone give me some assistance here? This is a learning experience for me, and days of documentation have left me very confused.

    Read the article

  • Sharing files between multiple sites using only desktop software

    - by perlyking
    Our organisation has three sites: a head office, where the master copies of company files are stored, plus two branch offices using only workstations and a NAS or two. Currently we're talking about <10GB. At the main office, we have no admin access to the file server, as this is entirely controlled by the larger institution where we are located. For the same reason, we have no VPN remote access to this network. Instead, we simply have access to a network share over a Novell LAN.

    Question: how can we share files between offices in a way that minimises latency, i.e. that gives us a mirror of the main network share at each site? (There is little likelihood of concurrent editing, and we can live with the odd file conflict now and again.)

    Up to now branch office staff have had to use GotoMyPC-type solutions to remotely access files held at the main office. Or email. I was hoping to use Google Drive on a dedicated workstation at each office to sync the contents of the network share (head office) or NAS (branch offices) via the cloud, but at my last attempt (29 Jun '12), the Google Drive installer would not allow me to designate the remote network share as the "target" folder. (I chose Google Drive over Dropbox et al. as we already use GMail for corporate mail.) The next idea was to use a designated workstation at head office to mirror the network share to a local drive, then use Google Drive to push that to the cloud. This seems a step too far. Nor do I have any good ideas about how to achieve this network/local mirroring, as we can't, for example, install the rsync daemon on the server.

    I do not want to use Google Drive locally on each workstation as this will inconvenience users, and more importantly, move files off the backed-up, well-maintained (UPS, RAID etc) network share at head office. Our budget is only in the £100's. Should we perhaps just ditch the head office server and use something like JungleDisk? At least this presents the user with what appears to be a mapped drive.

    Read the article

  • Top 10 Tips & Tricks for Oracle SQL Developer

    - by thatjeffsmith
    Being a short week due to the holiday, and with everyone enjoying their Summer vacations (apologies Southern Hemispherians), I reckoned it was a great time to do one of those lazy recap-Top 10-Reader's Digest type posts. I've been sharing 1-3 tips or 'tricks' a week since I started blogging about SQL Developer, and I have more than enough content to write a book. But since I'm lazy, I'm just going to compile a list of my favorite 'must know' tips instead. I always have to leave out a few tips when I do my presentations, so now I can refer back to this list to make sure I'm not forgetting anything. So without further ado...

    1. Configure Your Preferences
    Yes, there are a LOT of options. But you don't need to worry about all of them just yet. I do recommend you take a quick look at these ones in particular. Whether you're new to the tool or have been using it for 5 years, don't overlook these settings!

    2. Disable Extensions You Aren't Using
    If you're not using Data Miner, or if you're not working on a Migration - disable those extensions! SQL Developer will run leaner & meaner, plus the user interface will be a bit more simplified, making the tool easier to navigate as well.

    3. SQL Recall via Keyboard
    Access your history via the keyboard! Cycle through your recent SQL statements just using these magic key strokes: Ctrl+Up or Ctrl+Down.

    4. Format Your Query Output Directly to CSV, XML, HTML, etc.
    Have the query results pre-formatted in the format of your choice! Too lazy to run the Export wizard for your query result sets? Just add the SQL Developer output hints to your statement and have the output auto-magically formatted to the style of your choice! (A quick sketch of the hint syntax follows this list.)

    5. Drag & Drop Multiple Tables to the Worksheet
    SQL Developer will auto-join the related objects. You can then toggle over to the Query Builder to toggle off the columns you don't want to query. I guarantee this tip will save you time if you're joining 3 or more tables!

    6. Drag & Drop Multiple Tables to a Relational Model
    A pretty picture is worth a few dozen DDL scripts? SQL Developer does data modeling! If you ctrl-drag a table to a model, it will take that table and any related tables and reverse engineer them to a relational model! You can then print it out or export it to HTML, PDF, etc.

    7. View Your PL/SQL Execution Output Automatically
    Function returns a refcursor? Procedure had 3 out parameters? When you run these programs via the Procedure Editor, we automatically capture the output and place them into one or more data grids for you to browse.

    8. Disable Automatic Code Insight and Use It On-Demand
    Code Editor - Completion Insight - Enable Completion Auto-Popup (keyword being Auto). Some folks really don't like it when their IDEs or word-processors try to do 'too much' for them. Thankfully SQL Developer allows you to either increase the delay before it attempts to auto-complete your text OR to disable the automatic bit. Instead, you can invoke it on-demand.

    9. Interactive Debugging - Change Your Variable Values as You Step Through Your PL/SQL
    Watches aren't just for watching. You can actually interact with your programs and 'see what happens' when X = 256 instead of 1.

    10. Ditch the Tree View for the Schema Browser
    There's nothing wrong with the Connection tree for browsing your database objects. But some folks just can't seem to get comfortable with it. So, we built them a Schema Browser that uses a drop-down control instead for changing up your schema and object types.

    Already Know This Stuff, Want More?
    Just check out my SQL Developer resource page, it's one of the main links on the top of this page. Or if you can't find something, just drop me a note in the form of a comment on this page and I'll do my best to find it or write it for you.
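    To make tip 4 concrete: the output hints are just comments placed after SELECT; run the statement as a script (F5) and the worksheet formats the result for you. A quick sketch (table name illustrative):

        select /*csv*/  * from scott.emp;
        select /*xml*/  * from scott.emp;
        select /*html*/ * from scott.emp;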

    Read the article

  • FireFox 6 Super Slow? Cache Settings Corruption

    - by Rick Strahl
    For those of you that follow me on Twitter, you've probably seen some of my tweets regarding major performance problems I've seen with the install of FireFox 6.0. FireFox 6.0 was released a couple of weeks ago and is treated as a 'force feed' update for FireFox 5.0. I'm not sure what the deal is with this braindead versioning that Mozilla is doing with major version releases coming out, what, now every other month? Seriously, that's retarded, especially given the limited number of new features these releases bring and the upgrade pain for plug-ins that a major version release causes.

    Anyway, after the FireFox updater bugged me long enough I finally gave in last week and updated to FireFox 6. Immediately after install I noticed terrible performance. Everything was running at a snail's pace, with Web pages loading slowly and most content actually slowly 'painting' the page - a typical sign of content downloading slowly. However, these are pages that should be mostly cached on my system, and even repeated accesses ran just as slow. Just for a reality check I ran the same sites in Chrome (blazing fast) and IE (fast enough :-)) but FireFox - dog on a stick.

    Why so slow, Boss?

    While complaining, lots of people recommended to ditch FireFox - use Chrome, yada yada yada. Yeah, Chrome is fast and getting better, but I have a number of plug-ins that I use in FF that I can't easily give up. So I suffered and started looking around more closely at what was happening. The first thing I noticed when accessing pages was that I continually saw accesses to the Google CDN downloading jQuery and jQuery UI. UI especially is pretty heavy in size, and currently I'm in a location with a fairly slow IP connection where large files are a bit of an issue. However, seeing the CDN URLs pop up repeatedly raised a flag with me. That stuff should be caching, and it looked like each and every hit was reloading these scripts and various images over and over again. Fired up FireBug and sure enough I saw something like this on a repeated hit to my blog: [FireBug Net panel screenshot]

    Those two highlights are jQuery and the main CSS file for the site, and both are being loaded fully and taking a while to load. However, since this page had been loaded before, these items should be cached and show 304 requests instead of the full HTTP requests returning 200 result codes. In short, it looked like FireFox was not caching ANY content at all and constantly reloading all page resources. No wonder things were running dog slow.

    Once I realized what the problem was, I took a look in the about:config settings and lo and behold a bunch of the cache settings were set to not cache: [about:config screenshot]

    In my case ALL the main cache flags were set to false, for some reason that I can't figure out. It appears that after the FireFox 6 update these flags somehow mysteriously changed, and performance took a nose dive. Switching the .enable flags back to true and resetting all the cache settings to the defaults reverted performance back to the way it's supposed to be: reasonably fast and snappy as soon as content is cached and accessed again from cache.

    I try not to muck with the about:config settings much (other than turning off the IPV6 option), but when there are problems access to these features can be really nice. However, I treat this as a last resort, so it took me quite some time before I started looking through ALL the settings. This takes a while, not knowing what I was looking for exactly. If Web load performance is slow, it's a good idea to check the cache settings.
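    Since the screenshots don't carry over here: the flags in question are Firefox's standard cache prefs. A user.js sketch that pins them back on would look like the following - the names match the stock Firefox prefs, but verify them against your own about:config listing:

        user_pref("browser.cache.disk.enable", true);
        user_pref("browser.cache.memory.enable", true);
        user_pref("browser.cache.offline.enable", true);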
    I have no idea what hosed these settings for me - I certainly didn't explicitly set them in about:config, and while in FireFox's Options dialog I didn't see any option that would affect global caching like this, so this remains a mystery to me. Anyway, I hope that this is helpful to some, in case some of you end up running into a similar issue.

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in FireFox.

    Read the article

  • Amanda Todd – What Parents Can Learn From Her Story

    - by D'Arcy Lussier
    Amanda Todd was a bullied teenager who committed suicide this week. Her story has become headline news due in part to the YouTube video she posted telling her story. [embedded video]

    The story is heartbreaking for so many reasons, but I wanted to talk about what we as parents can learn from this. Being the dad to two girls, one that's 10, I'm very aware of the dangers that the internet holds. When I saw her story, one thing jumped out at me - unmonitored internet access at an early age. My daughter (then 9) came home from a friend's place once and asked if she could be in a YouTube video with her friend. Apparently this friend was allowed to do whatever she wanted on the internet, including posting goofy videos. This set off warning bells and we ensured our daughter realized the dangers and that she was not to ever post videos of herself.

    In looking at Amanda's story, the access to unmonitored internet time, along with just being a young girl and being flattered by an online predator, were the key events that ultimately led to her suicide. Yes, the reaction of her classmates and "friends" was horrible as well, I'm not diluting that. But our youth don't fully understand yet that what they do on the internet today will follow them potentially forever. And the people they meet online aren't necessarily who they claim to be. So what can we as parents learn from Amanda's story?

    Parents Shouldn't Feel Bad About Being Internet Police
    Our job as parents is in part to protect our kids and keep them safe, even if they don't like our measures. This includes monitoring, supervising, and restricting their internet activities. In our house we have a family computer in the living room where the kids can watch videos and surf the web. It's in plain view of everyone, so you can't hide what you're looking at. If our daughter goes to a friend's place, we ask about what they did and what they played. If the computer comes up, we ask about what they did on it. Luckily our daughter is very up front and honest in telling us things, so we have very open discussions.

    Parents Need to Be Honest About the Dangers of the Internet
    I'm sure every generation says that "kids grow up so fast these days", but in our case the internet really does push our kids to be exposed to things they otherwise wouldn't experience. One wrong word in a Google search, a click of a link in a spam email, or just general curiosity can expose a child to things they aren't ready for or should never be exposed to (and I'm not just talking about adult material - have you seen some of the graphic pictures from war zones posted on news sites recently?). Our stance as parents has been to be open about discussing the dangers with our kids before they encounter any content - be proactive instead of reactionary. Part of this is alerting them to the monsters that lurk on the internet as well. As kids explore the world wide web, they're eventually going to encounter some chat room or some Facebook friend invite or other personal connection with someone. More than ever, kids need to be educated on the dangers of engaging with people online and sharing personal information. You can think of it as an evolved discussion that our parents had with us about using the phone: "Don't say 'I'm home alone', don't say when mom or dad get home, don't tell them any information, etc."

    Parents Need to Talk Self-Worth at Home
    Katie makes the point better than I ever could (one bad word towards the end): [embedded video]

    Our children need to understand their value beyond what the latest issue of TigerBeat says, or the media who continue flaunting physical attributes over intelligence and character, or a society that puts focus on status and wealth. They also have to realize that just because someone pays you a compliment, that doesn't mean you should ignore personal boundaries and limits. What does this have to do with the internet? Well, in days past, if you wanted to be social you had to go out somewhere. Now you can video chat with any number of people from the comfort of wherever your laptop happens to be - and not just text, but full HD video with sound! While innocent children head online in the hopes of meeting cool people, predators with bad intentions are heading online too. As much as we try to monitor their online activity and be honest about the dangers of the internet, the human side of our kids isn't something we can control. But we can try to influence them to see themselves as not needing to search out the acceptance of complete strangers online. Way easier said than done, but ensuring self-worth is something discussed, encouraged, and celebrated is a step in the right direction.

    Parental Wake Up Call
    This post is not a critique of Amanda's parents. The reality is that cyber bullying/abuse is happening every day, and there are millions of parents that have no clue it's happening to their children. Amanda's story is a wake up call that our children's online activities may be putting them in danger. My heart goes out to the parents of this girl. As a father of daughters, I can't imagine what I would do if I found my daughter having to hide in a ditch to avoid a mob, or call 911 to report my daughter had attempted suicide by drinking bleach, or deal with a child turning to drugs/alcohol/cutting to cope. It would be horrendous if we as parents didn't re-evaluate our family internet policies in light of this event. And in the end, Amanda's video was meant to bring attention to her plight and encourage others going through the same thing. We may not be kids, but we can still honour her memory by helping safeguard our children.

    Read the article
