Search Results

Search found 133 results on 6 pages for 'trevor bekolay'.

Page 5/6

  • Drupal module permissions

    - by Trevor Newhook
    When I run the code with an admin user, the module returns what it should. However, when I run it with a normal user, I get a 403 error. The module returns data from an AJAX call. I've already tried adding a 'access callback' => 'user_access' line to the exoticlang_chat_logger_menu() function. I'd appreciate any pointers you might have. Thanks for the help.

    The AJAX call:

        jQuery.ajax({
          type: 'POST',
          url: '/chatlog',
          success: exoticlangAjaxCompleted,
          data: 'messageLog=' + privateMessageLogJson,
          dataType: 'json'
        });

    The module code:

        function exoticlang_chat_logger_init() {
          drupal_add_js('misc/jquery.form.js');
          drupal_add_library('system', 'drupal.ajax');
        }

        function exoticlang_chat_logger_permission() {
          return array(
            'Save chat data' => array(
              'title'       => t('Save ExoticLang Chat Data'),
              'description' => t('Send private message on chat close')
            ),
          );
        }

        /**
         * Implementation of hook_menu().
         */
        function exoticlang_chat_logger_menu() {
          $items = array();
          $items['chatlog'] = array(
            'type' => MENU_CALLBACK,
            'page callback' => 'exoticlang_chat_log_ajax',
            'access arguments' => 'Save chat data');
            //'access callback' => 'user_access');
          return $items;
        }

        function exoticlang_chat_logger_ajax() {
          $messageLog = stripslashes($_POST['messageLog']);
          $chatLog = 'Drupal has processed this. Message log is: ' . $messageLog;
          $chatLog = str_replace('":"{[{', '":[{', $chatLog);
          $chatLog = str_replace(',,', ',', $chatLog);
          $chatLog = str_replace('"}"', '"}', $chatLog);
          $chatLog = str_replace('"}]}"', '"}]', $chatLog);
          echo json_encode(array('messageLog' => $chatLog));
          // echo $chatLog;
          echo print_r(privatemsg_new_thread(array(user_load(1)), 'The subject', 'The body text'));
          drupal_exit();
        }
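
    A likely fix, assuming Drupal 7's hook_menu() conventions: 'access arguments' must be an array, not a bare string, and user 1 bypasses permission checks entirely, which would explain why only the admin gets through. Note too that the registered page callback (exoticlang_chat_log_ajax) doesn't match the function the module defines (exoticlang_chat_logger_ajax), which would give a 404 once the permission check passes. A corrected sketch of the hook:

        /**
         * Implements hook_menu().
         *
         * Sketch only: wraps 'access arguments' in an array and points the
         * page callback at the function the module actually defines.
         */
        function exoticlang_chat_logger_menu() {
          $items = array();
          $items['chatlog'] = array(
            'type' => MENU_CALLBACK,
            'page callback' => 'exoticlang_chat_logger_ajax',
            'access arguments' => array('Save chat data'),
          );
          return $items;
        }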

  • New cast exception with VS2010/.Net 4

    - by Trevor
    [ Updated 25 May 2010 ] I've recently upgraded from VS2008 to VS2010, and at the same time upgraded to .NET 4. I've recompiled an existing solution of mine and I'm encountering a cast exception I did not have before. The structure of the code is simple (although the actual implementation somewhat more complicated). Basically I have:

        public class SomeClass : ISomeClass
        {
            // Stuff
        }

        public static class ClassFactory
        {
            public static IInterface GetClassInstance<IInterface>(Type classType)
            {
                return (IInterface)Activator.CreateInstance(classType); // This throws a cast exception
            }
        }

        // Call the factory with:
        ISomeClass anInstance = ClassFactory.GetClassInstance<ISomeClass>(typeof(SomeClass));

    Ignore the 'sensibleness' of the above - it provides just a representation of the issue rather than the specifics of what I'm doing (e.g. constructor parameters have been removed). The marked line throws the exception:

        Unable to cast object of type 'Namespace.SomeClass' to type 'Namespace.ISomeClass'.

    I suspect it may have something to do with the additional .NET security (and in particular, explicit loading of assemblies, as this is something my app does). The reason I suspect this is that I have had to add this setting to the config file:

        <runtime>
          <loadFromRemoteSources enabled="true" />
        </runtime>

    .. but I'm unsure if this is related.

    Update: I see (from comments) that my basic code does not reproduce the issue by itself. Not surprising, I suppose. It's going to be tricky to identify which part of a largish 3-tier CQS system is relevant to this problem. One issue might be that there are multiple assemblies involved. My static class is actually a factory provider, and the 'SomeClass' is a class factory (relevant in that the factories are 'registered' within the app via explicit assembly/type loading - see below).

    Upfront I use reflection to 'register' all factories (i.e. classes that implement a particular interface). I do this when the app starts by identifying the relevant assemblies, loading them and adding them to a cache using (in essence):

        foreach (string file in files)
        {
            Assembly assembly = Assembly.LoadFile(file);
            baseAssemblyList.Add(assembly);
        }

    Then I cache the available types in these assemblies with:

        foreach (Assembly assembly in _loadedAssemblyList)
        {
            Type[] assemblyTypes = assembly.GetTypes();
            _loadedTypesCache.AddRange(assemblyTypes);
        }

    And then I use this cache to do a variety of reflection operations, including 'registering' of factories, which involves looping through all loaded (cached) types and finding those that implement the (base) Factory interface.

    I've experienced what may be a similar problem in the past (.NET 3.5, so not exactly the same) with an architecture that involved dynamically creating classes on the server and streaming the compiled binary of those classes to the client app. The problem came when trying to deserialize an instance of the dynamic class on the client from a remote call: the exception said the class type was not known, even though the source and destination types had exactly the same name (including namespace). Basically the cross-boundary versions of the class were not recognised as being the same. I solved that by intercepting the deserialization process and explicitly defining the deserialization class type in the context of the local assemblies.

    This experience is what makes me think the types are considered mismatched because (somehow) the interface on the actual SomeClass object and the interface type passed into the generic method are not considered the same type. So (possibly) my question for those more knowledgeable about C#/.NET is: how does the class loading work such that my app thinks there are two versions/types of the interface type, and how can I fix that?

    [ whew ... anyone who got here is quite patient .. thanks ]
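
    A plausible mechanism, hedged since the excerpt doesn't confirm it: Assembly.LoadFile loads each file outside the normal binding context, so an assembly that is also referenced by the project can end up loaded twice, and the CLR treats the two copies of ISomeClass as distinct types even though their names match. A diagnostic sketch (the two Type arguments come from the question's scenario):

        using System;

        static class LoadContextCheck
        {
            // 'actual' would be instance.GetType().GetInterface("ISomeClass");
            // 'expected' would be typeof(ISomeClass) in the calling assembly.
            public static void Report(Type actual, Type expected)
            {
                Console.WriteLine(actual == expected);           // false => two type identities
                Console.WriteLine(actual.AssemblyQualifiedName);
                Console.WriteLine(expected.AssemblyQualifiedName);
                Console.WriteLine(actual.Assembly.Location);     // which file each copy
                Console.WriteLine(expected.Assembly.Location);   // was loaded from
            }
        }

    If the two Locations differ, loading by assembly identity (Assembly.Load) or via Assembly.LoadFrom instead of Assembly.LoadFile usually collapses the duplicates into a single binding context.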

  • Should I use Spring or Guice for a Tomcat/Wicket/Hibernate project?

    - by Trevor Allred
    I'm building a new web application that uses Linux, Apache, Tomcat, Wicket, JPA/Hibernate, and MySQL. My primary need is dependency injection, which both Spring and Guice do well. I think I need the transaction support that would come with Spring and JTA, but I'm not sure. The site will probably have about 20 pages and I'm not expecting huge traffic. Should I use Spring or Guice? Feel free to ask follow-up questions and I'll do my best to update this.

  • What is the worst real-world macros/pre-processor abuse you've ever come across?

    - by Trevor Boyd Smith
    What is the worst real-world macro/pre-processor abuse you've ever come across (please, no contrived IOCCC answers *haha*)? Please add a short snippet or story if it is really entertaining. The goal is to teach something instead of always telling people "never use macros".

    p.s.: I've used macros before... but usually I get rid of them eventually when I have a "real" solution (even if the real solution is inlined so it becomes similar to a macro).

    Bonus: Give an example where the macro really was better than a non-macro solution.

    Related question: When are C++ macros beneficial?
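
    For readers new to the topic, a minimal illustration of the kind of hazard the question is soliciting (an invented example, not one from the thread):

        #include <stdio.h>

        #define SQUARE(x) x * x   /* missing parentheses around x and the body */

        int main(void)
        {
            /* Expands to 1 + 2 * 1 + 2, which is 5 rather than the intended 9. */
            printf("%d\n", SQUARE(1 + 2));
            return 0;
        }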

  • Uninstall Rails 3 with dependencies?

    - by Trevor Burnham
    I like that Rails 3 is so easy to install: gem install rails --pre, and all of the dependencies are automatically installed for you. But, what about uninstalling it? If I just do gem uninstall rails, I still have:

        actionmailer (3.0.0.beta3)
        actionpack (3.0.0.beta3)
        activemodel (3.0.0.beta3)
        activerecord (3.0.0.beta3)
        activeresource (3.0.0.beta3)
        activesupport (3.0.0.beta3)

    which I want to get rid of. What's the easiest way to do so?
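
    A hedged sketch of one direct approach, assuming no other gems depend on the beta components (gem uninstall accepts -v to target a specific version):

        # Remove each beta dependency explicitly; names taken from the list above.
        for g in actionmailer actionpack activemodel activerecord activeresource activesupport; do
          gem uninstall "$g" -v 3.0.0.beta3
        done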

  • Performance hit from C++ style casts?

    - by Trevor Boyd Smith
    I am new to C++ style casts and I am worried that using C++ style casts will ruin the performance of my application because I have a real-time-critical deadline in my interrupt-service-routine. I heard that some casts will even throw exceptions! I would like to use the C++ style casts because it would make my code more "robust". However, if there is any performance hit then I will probably not use C++ style casts and will instead spend more time testing the code that uses C-style casts. Has anyone done any rigorous testing/profiling to compare the performance of C++ style casts to C style casts? What were your results? What conclusions did you draw?
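
    Some general C++ background that frames the question (standard semantics, not benchmark results from any answer): static_cast, const_cast, and reinterpret_cast are resolved entirely at compile time and produce the same code as the equivalent C-style cast; only dynamic_cast does runtime work, and only its reference form throws. A sketch:

        #include <typeinfo>

        struct Base    { virtual ~Base() {} };
        struct Derived : Base {};

        void casts(Base* p, Base& r)
        {
            Derived* a = (Derived*)p;                // C-style: no runtime cost
            Derived* b = static_cast<Derived*>(p);   // identical machine code to the line above

            Derived* c = dynamic_cast<Derived*>(p);  // runtime check; null pointer on failure
            Derived& d = dynamic_cast<Derived&>(r);  // runtime check; throws std::bad_cast on failure

            (void)a; (void)b; (void)c; (void)d;
        }

    So for an interrupt service routine, static_cast costs nothing over a C-style cast; the only cast to keep out of a hard deadline path is dynamic_cast.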

  • How to stop listening on an HTTP::Daemon port in Perl

    - by Trevor
    I have a basic Perl HTTP server using HTTP::Daemon. When I stop and start the script, it appears that the port is still being listened on and I get an error message saying that my HTTP::Daemon instance is undefined. If I try to start the script about a minute after it has stopped, it works fine and can bind to the port again. Is there any way to stop listening on the port when the program terminates instead of having to wait for it to time out?

        use HTTP::Daemon;
        use HTTP::Status;

        my $d = new HTTP::Daemon(LocalAddr => 'localhost', LocalPort => 8000);
        while (my $c = $d->accept) {
            while (my $r = $c->get_request) {
                $c->send_error(RC_FORBIDDEN);
            }
            $c->close;
            undef($c);
        }
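
    A likely remedy, assuming standard socket semantics: the roughly one-minute delay is the old socket sitting in TIME_WAIT after the process exits. HTTP::Daemon inherits IO::Socket::INET's constructor options, so ReuseAddr (SO_REUSEADDR) lets the new process bind anyway:

        my $d = HTTP::Daemon->new(
            LocalAddr => 'localhost',
            LocalPort => 8000,
            ReuseAddr => 1,              # allow rebinding while TIME_WAIT lingers
        ) or die "Cannot bind: $!";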

  • SQLite problem with some parameterized queries

    - by Trevor Balcom
    I am having some trouble using SQLite and parameterized queries with a few tables. I have noticed that some queries using "SELECT * FROM Table WHERE row=?" return 1 row when there should be more rows returned. If I change the parameterized query to "SELECT * FROM Table WHERE row='row'", then the correct number of rows is returned. Does anyone know why sqlite3_step would return only 1 row when using a parameterized query vs. the same query in a traditional non-parameterized way? I am using a very thin C++ wrapper around SQLite3. I suspect there could be a problem with the wrapper, but this problem only exists on a few tables. It makes me wonder if there is something wrong with the way those tables are set up. Any advice is appreciated.
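
    For comparison, a sketch of the canonical bind/step pattern the wrapper should reduce to (table and column names are placeholders). Two wrapper bugs worth checking for: calling sqlite3_step only once instead of looping until SQLITE_DONE, which yields exactly one row, and passing strlen(s)+1 (including the terminating NUL) as the length to sqlite3_bind_text, which silently changes the bound value:

        #include <cstdio>
        #include <sqlite3.h>

        int dump_matching_rows(sqlite3* db, const char* value)
        {
            sqlite3_stmt* stmt = 0;
            int rc = sqlite3_prepare_v2(db, "SELECT col FROM t WHERE row = ?", -1, &stmt, 0);
            if (rc != SQLITE_OK) return rc;

            // Parameter indexes are 1-based; -1 means "take strlen(value)";
            // SQLITE_TRANSIENT makes SQLite copy the string immediately.
            sqlite3_bind_text(stmt, 1, value, -1, SQLITE_TRANSIENT);

            while ((rc = sqlite3_step(stmt)) == SQLITE_ROW) {
                printf("%s\n", (const char*)sqlite3_column_text(stmt, 0));
            }

            sqlite3_finalize(stmt);
            return rc == SQLITE_DONE ? SQLITE_OK : rc;
        }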

  • Math.min.apply(0, x) - why?

    - by Trevor Burnham
    I was just digging through some JavaScript code (Raphaël.js) and came across the following line (translated slightly): Math.min.apply(0, x) where x is an array. Why on earth would you do this? The behavior seems to be "take the min from the array x."
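
    Background that answers the puzzle directly (standard JavaScript semantics): Function.prototype.apply invokes a function with a given this value and an array of arguments, and Math.min never uses its this, so the first argument can be anything cheap; 0 is presumably chosen because it is shorter than null in minified source. A sketch:

        var x = [3, 1, 4, 1, 5];

        // apply(thisArg, argsArray) spreads the array into separate arguments:
        Math.min.apply(0, x);     // same as Math.min(3, 1, 4, 1, 5) -> 1

        // Math.min ignores `this`, so the placeholder is arbitrary:
        Math.min.apply(null, x);  // also 1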

  • Undefined method 'total_entries' after upgrading Rails 2.2.2 to 2.3.5

    - by Trevor
    I am upgrading a Rails application from 2.2.2 to 2.3.5. The only remaining error is when I invoke total_entries for creating a jqgrid.

    Error:

        NoMethodError (undefined method `total_entries' for #<Array:0xbbe9ab0>)

    Code snippet:

        @route = Route.find( :all, :conditions => "id in (#{params[:id]})" ) {
          if params[:page].present? then
            paginate :page => params[:page], :per_page => params[:rows]
            order_by "#{params[:sidx]} #{params[:sord]}"
          end
        }

        respond_to do |format|
          format.html # show.html.erb
          format.xml   { render :xml => @route }
          format.json  { render :json => @route }
          format.jgrid { render :json => @route.to_jqgrid_json( [ :id, :name ],
                                params[:page], params[:rows], @route.total_entries ) }
        end

    Any ideas? Thanks!
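
    A hedged reading, assuming the will_paginate plugin: total_entries is defined on WillPaginate::Collection, which paginate returns, whereas Route.find returns a plain Array (the block after find is not stock ActiveRecord, so it may have stopped taking effect in 2.3.5). A sketch of the paginate form:

        # Sketch only: will_paginate's class-level paginate returns a
        # WillPaginate::Collection, which responds to total_entries.
        @route = Route.paginate(
          :conditions => "id in (#{params[:id]})",    # unescaped, as in the original
          :order      => "#{params[:sidx]} #{params[:sord]}",
          :page       => params[:page] || 1,
          :per_page   => params[:rows] || 10
        )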

  • Html.EditorFor Global Template?

    - by Grant Trevor
    Is there any way to define a global template for the Html.EditorFor helper? I would like to alter the markup that is output so that, for example, instead of rendering:

        <div class="editor-label">
          <label ... />
        </div>
        <div class="editor-field">
          <input ... />
        </div>

    it would render:

        <div>
          <div class="label"><label ... /></div>
          <div class="field"><input ... /></div>
        </div>

    This is for when I'm using Html.EditorFor with an object instance, not just an object property.
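
    One approach worth sketching, assuming ASP.NET MVC 2's templating conventions: Html.EditorFor on a whole object uses the built-in Object template, which can be overridden globally by adding ~/Views/Shared/EditorTemplates/Object.ascx. A hedged sketch:

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
        <%@ Import Namespace="System.Linq" %>
        <div>
        <% foreach (var prop in ViewData.ModelMetadata.Properties
                        .Where(p => p.ShowForEdit && !ViewData.TemplateInfo.Visited(p))) { %>
            <div class="label"><%= Html.Label(prop.PropertyName) %></div>
            <div class="field"><%= Html.Editor(prop.PropertyName) %></div>
        <% } %>
        </div>

    The Visited check mirrors what the default Object template does to avoid recursing into a property it is already rendering.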

  • Can I use RVM to maintain a single version of Ruby for all users?

    - by Trevor Burnham
    I love RVM. I realize that the main use case for it is letting different users switch between different versions of Ruby. But let's say I'm deploying a Rails app to a server and I just want a single version of Ruby running. In particular, I want 1.9.2, which is a breeze to install with RVM but a pain without it. Is there a way that I can say "I want this to be the canonical Ruby installation for all users" (along with all of its gems) without having to create a bunch of symlinks by hand and change them every time I update to a newer Ruby release?
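
    A hedged sketch of one route, assuming RVM's multi-user ("system-wide") mode, which installs under /usr/local/rvm and shares rubies and gemsets across accounts (details vary by RVM version, so treat this as the shape of the answer rather than exact commands):

        # Run as root with a multi-user RVM install in place:
        rvm install 1.9.2
        rvm use 1.9.2 --default   # make 1.9.2 the default Ruby for every user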

  • Memory increases with Java UDP Server

    - by Trevor
    I have a simple UDP server that creates a new thread for processing incoming data. While testing it by sending about 100 packets/second, I notice that its memory usage continues to increase. Is there any leak evident from my code below?

    Here is the code for the server:

        public class UDPServer
        {
            public static void main(String[] args)
            {
                UDPServer server = new UDPServer(15001);
                server.start();
            }

            private int port;

            public UDPServer(int port)
            {
                this.port = port;
            }

            public void start()
            {
                try
                {
                    DatagramSocket ss = new DatagramSocket(this.port);

                    while (true)
                    {
                        byte[] data = new byte[1412];
                        DatagramPacket receivePacket = new DatagramPacket(data, data.length);
                        ss.receive(receivePacket);
                        new DataHandler(receivePacket.getData()).start();
                    }
                }
                catch (IOException e)
                {
                    e.printStackTrace();
                }
            }
        }

    Here is the code for the new thread that processes the data. For now, the run() method doesn't do anything:

        public class DataHandler extends Thread
        {
            private byte[] data;

            public DataHandler(byte[] data)
            {
                this.data = data;
            }

            @Override
            public void run()
            {
                System.out.println("run");
            }
        }
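
    Two hedged observations on this pattern (general JVM behavior, not findings from the thread): heap growth under load is often just the collector deferring work rather than a leak, and a thread per packet at 100 packets/s allocates a Thread plus a 1412-byte buffer every time; note also that receivePacket.getData() returns the entire backing array, not just the bytes received. A bounded sketch using a fixed pool:

        import java.io.IOException;
        import java.net.DatagramPacket;
        import java.net.DatagramSocket;
        import java.util.Arrays;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class PooledUDPServer
        {
            public static void main(String[] args) throws IOException
            {
                ExecutorService pool = Executors.newFixedThreadPool(4);
                DatagramSocket ss = new DatagramSocket(15001);
                byte[] data = new byte[1412];  // reused; payload is copied out below

                while (true)
                {
                    DatagramPacket p = new DatagramPacket(data, data.length);
                    ss.receive(p);
                    // Copy only the bytes actually received, not the full buffer.
                    final byte[] payload = Arrays.copyOf(data, p.getLength());
                    pool.execute(new Runnable()
                    {
                        public void run()
                        {
                            System.out.println("received " + payload.length + " bytes");
                        }
                    });
                }
            }
        }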

  • Rails modeling for a user

    - by Trevor Hartman
    When building a Rails app that allows a User to log in and create data, is it best to set up a belongs_to :user association on every single model? For example, let's say a user can create Favorites, Colors and Tags. And let's say Favorites has_many :tags and Colors also has_many :tags. Is it still important for Tags to belong_to :user, assuming the User is the only person who has authority to edit those tags?

    And a similar question along the same lines: when updating data in FavoritesController, I've come to the conclusion that you perform CRUD operations by always doing something like user.favorites.find(params[:id]).update_attributes(params[:favorite]), so that users can definitely only update models that belong to them. Right?
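
    A minimal sketch of that scoping pattern, assuming a current_user helper from the app's authentication (names hypothetical):

        class FavoritesController < ApplicationController
          def update
            # Scoping the find through the association raises RecordNotFound
            # for any record the signed-in user does not own.
            @favorite = current_user.favorites.find(params[:id])
            @favorite.update_attributes(params[:favorite])
          end
        end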

  • Why won't my code segfault on Windows 7?

    - by Trevor
    This is an unusual question to ask, but here goes: in my code, I accidentally dereference NULL somewhere. But instead of the application crashing with a segfault, it seems to stop execution of the current function and just return control back to the UI. This makes debugging difficult because I would normally like to be alerted to the crash so I can attach a debugger.

    What could be causing this? Specifically, my code is an ODBC driver (i.e. a DLL). My test application is ODBC Test (odbct32w.exe), which allows me to explicitly call the ODBC API functions in my DLL. When I call one of the functions which has a known segfault, instead of crashing the application, ODBC Test simply returns control to the UI without printing the result of the function call. I can then call any function in my driver again.

    I do know that technically the application calls the ODBC driver manager, which loads and calls the functions in my driver. But that is beside the point, as my segfault (or whatever is happening) causes the driver manager function to not return either (as evidenced by the application not printing a result). One of my co-workers with a similar machine experiences this same problem while another does not, but we have not been able to determine any specific differences.
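
    A plausible explanation, offered as an assumption rather than a diagnosis: something above the driver wraps the call in Win32 structured exception handling, so the access violation is caught and discarded instead of killing the process. The effect is easy to reproduce:

        #include <stdio.h>
        #include <windows.h>

        int main(void)
        {
            __try
            {
                *(volatile int *)0 = 42;   /* NULL dereference */
            }
            __except (EXCEPTION_EXECUTE_HANDLER)
            {
                /* The access violation lands here; the process never crashes. */
                printf("swallowed exception 0x%lx\n", GetExceptionCode());
            }
            return 0;
        }

    If that is what's happening, attaching the debugger before the call still helps: the access violation surfaces as a first-chance exception before any handler runs.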

  • Getting error: CS1061

    - by coure06
    Refer to http://stackoverflow.com/questions/369794/good-and-full-implementation-of-rss-feeds-in-asp-net-mvc and see the answer from Trevor de Koekkoek. I am getting this error:

        CS1061: 'object' does not contain a definition for 'Items' and no extension method 'Items' accepting a first argument of type 'object' could be found (are you missing a using directive or an assembly reference?)
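
    A hedged pointer, since the linked answer isn't reproduced here: CS1061 against 'object' usually means the view's Model is untyped, so the compiler only sees System.Object. Either cast the model or declare a typed page (type names hypothetical):

        // Cast the untyped Model before using its members...
        var feed = (RssFeedViewModel)Model;
        foreach (var item in feed.Items) { /* render item */ }

        // ...or type the view so no cast is needed:
        // <%@ Page Inherits="System.Web.Mvc.ViewPage<RssFeedViewModel>" %>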

  • Five Key Strategies in Master Data Management

    - by david.butler(at)oracle.com
    Here is a very interesting Profit Magazine article on MDM: a recent customer survey reveals the deleterious effects of data fragmentation. By Trevor Naidoo, December 2010.

    Across industries and geographies, IT organizations have grown in complexity, whether due to mergers and acquisitions, or decentralized systems supporting functional or departmental requirements. With systems architected over time to support unique, one-off process needs, they are becoming costly to maintain, and the Internet has only further added to the complexity. Data fragmentation has become a key inhibitor in delivering flexible, user-friendly systems. The Oracle Insight team conducted a survey assessing customers' master data management (MDM) capabilities over the past two years to get a sense of where they are in terms of their capabilities. The responses, by 27 respondents from six different industries, reveal five key areas in which customers need to improve their data management in order to get better financial results.

    1. Less than 15 percent of organizations surveyed understand the sources and quality of their master data, and have a roadmap to address missing data domains. Examples of the types of master data domains referred to are customer, supplier, product, financial and site. Many organizations have multiple sources of master data with varying degrees of data quality in each source -- customer data stored in the customer relationship management system is inconsistent with customer data stored in the order management system. Imagine not knowing how many places you stored your customer information, and whether a customer's address was the most up to date in each source. In fact, more than 55 percent of the respondents in the survey manage their data quality on an ad-hoc basis. It is important for organizations to document their inventory of data sources and then profile these data sources to ensure that there is a consistent definition of key data entities throughout the organization. Some questions to ask are: How do we define a customer? What is a product? How do we define a site? The goal is to strive for one common repository for master data that acts as a cross reference for all other sources and ensures consistent, high-quality master data throughout the organization.

    2. Only 18 percent of respondents have an enterprise data management strategy to ensure that data is treated as an asset to the organization. Most respondents handle data at the department or functional level and do not have an enterprise view of their master data. The sales department may track all their interactions with customers as they move through the sales cycle, the service department is tracking their interactions with the same customers independently, and the finance department also has a different perspective on the same customer. The salesperson may not be aware that the customer she is trying to sell to is experiencing issues with existing products purchased, or that the customer is behind on previous invoices. The lack of a data strategy makes it difficult for business users to turn data into information via reports. Without the key building blocks in place, it is difficult to create key linkages between customer, product, site, supplier and financial data. These linkages make it possible to understand patterns. A well-defined data management strategy is aligned to the business strategy and helps create the governance needed to ensure that data stewardship is in place and data integrity is intact.

    3. Almost 60 percent of respondents have no strategy to integrate data across operational applications. Many respondents have several disparate sources of data with no strategy to keep them in sync with each other. Even though there is no clear strategy to integrate the data (see #2 above), the data needs to be synced and cross-referenced to keep the business processes running. About 55 percent of respondents said they perform this integration on an ad hoc basis, and in many cases, it is done manually with the help of Microsoft Excel spreadsheets. For example, a salesperson needs a report on global sales for a specific product, but the product has different product numbers in different countries. Typically, an analyst will pull all the data into Excel, manually create a cross reference for that product, and then aggregate the sales. The exact same procedure has to be followed if the same report is needed the following month. A well-defined consolidation strategy will ensure that a central cross-reference is maintained with updates in any one application being propagated to all the other systems, so that data is synchronized and up to date. This can be done in real time or in batch mode using integration technology.

    4. Approximately 50 percent of respondents spend manual efforts cleansing and normalizing data. Information stored in various systems usually follows different standards and formats, making it difficult to match the data. A customer's address can be stored in different ways using a variety of abbreviations -- for example, "av" or "ave" for avenue. Similarly, a product's attributes can be stored in a number of different ways; for example, a size attribute can be spelled out in inches or entered using the inch symbol ("). These types of variations make it difficult to match up data from different sources. Today, most customers rely on manual, heroic efforts to match, cleanse, and de-duplicate data -- clearly not a scalable, sustainable model. To solve this challenge, organizations need the ability to standardize data for customers, products, sites, suppliers and financial accounts; however, less than 10 percent of respondents have technology in place to automatically resolve duplicates. It is no wonder, therefore, that we get communications about products we don't own, at addresses where we don't reside, and through channels (like direct mail) we don't like. An all-too-common example of a potential challenge follows: customers end up receiving duplicate communications, which not only impacts customer satisfaction, but also incurs additional mailing costs. Cleansing, normalizing, and standardizing data will help address most of these issues.

    5. Only 10 percent of respondents have the ability to share data that was mastered in a master data hub. Close to 60 percent of respondents have efforts in place that profile, standardize and cleanse data manually, and the output of these efforts is stored in spreadsheets in various parts of the organization. This valuable information is not easily shared with the rest of the organization and, more importantly, this enriched information cannot be sent back to the source systems so that the data is fixed at the source. A key benefit of a master data management strategy is not only to clean the data, but to also share the data back to the source systems as well as other systems that need the information. Aside from the source systems, another key beneficiary of this data is the business intelligence system. Having clean master data as input to business intelligence systems provides more accurate and enhanced reporting.

    Characteristics of Stellar MDM

    When deciding on the right master data management technology, organizations should look for solutions that have four main characteristics:

      • enterprise-grade MDM performance
      • complete technology that can be rapidly deployed and addresses multiple business issues
      • end-to-end MDM process management with data quality monitoring and assurance
      • pre-built MDM business relevant applications with data stores and workflows

    These master data management capabilities will aid in moving closer to a best-practice maturity level, delivering tremendous efficiencies and savings as well as revenue growth opportunities as a result of better understanding your customers.

    Trevor Naidoo is a senior director in Industry Strategy and Insight at Oracle.

  • Follow the How-To Geek Writers on Twitter

    - by The Geek
    Ever wonder what the How-To Geek writers are up to? If you're a Twitter user, you can connect with us directly. We've also set up a new @howtogeeknews account if you just want to keep up with the latest articles.

    So if you want just the latest articles... click the image below and then click the Follow button. Otherwise, if you'd like to connect with the rest of us that actually use Twitter, you can follow each of us separately through the links below.

    Note: Let's try to stick to discussion, and leave the tech support questions for our forum.

      • the How-To Geek (that's me!) – @howtogeek
      • Matthew Guay – @maguay
      • Trevor Bekolay – @TrevorBekolay
      • Asian Angel – @asian_angel
      • Andrew Gehman – @andrewgehman

    Some of the HTG writers are not currently using Twitter... but I'm gonna list their accounts just in case you wanted to follow them.

      • Mark Virtue – @markvirtue
      • Mysticgeek – @mysticgeek (He's far too productive to waste time on Twitter!)

    Enjoy the conversation!

  • Slides and Pictures from PowerShell Saturday Columbus 2012

    - by Brian Jackett
    On March 10th, 2012 the first ever PowerShell Saturday conference took place in Columbus, OH, and I couldn't be happier with the outcome. We had 100 attendees from 10 different states (the biggest surprise to me) come to see 6 speakers present on a variety of PowerShell topics: introduction, WMI, SharePoint, Active Directory, Exchange, 3rd party products and more.

    A big thank you also goes out to a number of people.

    Planning committee:
      • Wes Stahler, lead organizer of PowerShell Saturday Columbus, president of Central Ohio PowerShell User Group
      • Ed "Microsoft Scripting Guy" Wilson
      • Teresa "The Scripting Wife" Wilson
      • Ashley McGlone
      • Brian T. Jackett (myself)

    Speakers:
      • Ed Wilson
      • Ashley McGlone
      • James Brundage
      • Trevor Sullivan
      • Daniel Cruz

    Volunteer:
      • Lisa Gardner, fellow Microsoft PFE, volunteered her time on a Saturday to assist with smooth operation of the day

    Facility coordination:
      • Debbie Carrier, facilities coordinator for the Columbus Microsoft office, who helped us out greatly with the venue

    Slides and Script Samples

    I presented my session on "PowerShell for the SharePoint 2010 Developer". Below you can download the slides and script samples.

    Photos

    I wasn't able to take many pictures (only 3) as I was busy doing my presentation, answering questions, and taking care of random items throughout the day.

      • Pictures on Facebook: click here
      • Pictures on SkyDrive (higher res): PowerShell Saturday Columbus Mar '12

    Conclusion

    I'm very happy that this first ever PowerShell Saturday was a success. My fellow PFE and speaker Ashley McGlone also has a short write-up on his blog about the event (click here). I have heard rumors that there are other cities starting to plan their own local events. When I hear more details I'll spread the word here and on Twitter.

    -Frog Out

  • fast opening and closing connection with a specific port

    - by michale
    We have a main application named "Trevor" installed on a Windows Server 2008 R2 machine named "TEAMER12", and it is slow now. One more application named "TVS" is also running on it, and we found that many connections per second were occurring to port 5009. The netstat tool shows rapid connection opens and closes on port 5009.

    First the port is listening, as shown below:

        TCP    0.0.0.0:5009        TEAMER12:0      LISTENING

    Then it establishes connections like:

        TCP    127.0.0.1:5009      TEAMER12:49519  ESTABLISHED
        TCP    127.0.0.1:5009      TEAMER12:60903  ESTABLISHED

    After that they move to TIME_WAIT, and I could see several entries like:

        TCP    127.0.0.1:49156     TEAMER12:5009   TIME_WAIT

    Then it establishes connections like:

        TCP    127.0.0.1:60903     TEAMER12:5009   ESTABLISHED
        TCP    127.0.0.1:64181     TEAMER12:microsoft-ds  ESTABLISHED

    Again it goes through several TIME_WAIT entries like:

        TCP    127.0.0.1:49156     TEAMER12:5009   TIME_WAIT

    Finally it settles like this:

        TCP    172.26.127.40:139   TEAMER12:0      LISTENING
        TCP    172.26.127.42:139   TEAMER12:0      LISTENING
        TCP    172.26.127.42:5009  TEAMER12:64445  ESTABLISHED
        TCP    172.26.127.42:64445 TEAMER12:5009   ESTABLISHED

    Can anybody tell me why so many connections per second are being made to port 5009, and why the application is slow?
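
    A hedged first step, using stock Windows tooling: map the port 5009 traffic to the owning processes, so it is clear whether "Trevor" or "TVS" is generating the churn (rapid open/close cycles also consume ephemeral ports, which by itself can slow a busy box):

        :: Show connections on port 5009 together with the owning process ID...
        netstat -ano | findstr :5009

        :: ...then resolve a PID seen above to its process name (PID 1234 is a placeholder).
        tasklist /FI "PID eq 1234"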
