Search Results

Search found 968 results on 39 pages for 'lambda'.


  • Is there an available build demonstrating new JDK 7 features?

    - by xdevel2000
    I wish to test the new features that will come with the next JDK, like Project Coin, Project Lambda, etc., but the latest JDK 7 build available for download doesn't have any of them implemented yet! From which build can I test them? I find it incredible that now, in May 2010, just a few months before the planned final release (November 2010?), we developers still have no way to try any of these features!

    Read the article

  • How can I tell the number of replacements in a formatter string?

    - by sdanna
    Given the following method (the real method has a few more parameters, but the important ones are below): public string DoSomething(string formatter, params string[] values) { // Do something eventually involving a call to String.Format(formatter, values); } Is there a way to tell whether my values array has enough items in it to cover the placeholders in formatter, so that I can throw an exception if it doesn't (short of calling String.Format itself, which isn't an option until the end due to some lambda conversions)?
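
    One possible approach (a sketch, not from the original question): strip escaped braces, then use a regular expression to find the highest placeholder index the formatter references and compare it against values.Length.

      // Sketch only: counts placeholders without calling String.Format.
      // Escaped braces ("{{" / "}}") are removed first so they aren't miscounted.
      using System;
      using System.Linq;
      using System.Text.RegularExpressions;

      public static class FormatGuard
      {
          public static void EnsureEnoughValues(string formatter, params string[] values)
          {
              string stripped = formatter.Replace("{{", "").Replace("}}", "");
              int highestIndex = Regex.Matches(stripped, @"\{(\d+)")
                                      .Cast<Match>()
                                      .Select(m => int.Parse(m.Groups[1].Value))
                                      .DefaultIfEmpty(-1)
                                      .Max();
              if (highestIndex >= values.Length)
                  throw new ArgumentException(string.Format(
                      "Formatter references index {0} but only {1} value(s) were supplied.",
                      highestIndex, values.Length));
          }
      }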

    Read the article

  • Dynamic class_name for has_many relations

    - by vooD
    I'm trying to build a has_many relation with a dynamic :class_name attribute: class Category < ActiveRecord::Base has_many :ads, :class_name => ( lambda { return self.item_type } ) end or class Category < ActiveRecord::Base has_many :ads, :class_name => self.item_type end But I get errors: can't convert Proc into String or undefined method `item_type' for #<Class:0xb62c6c88>. Thank you for any help!
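
    One workaround sometimes used for this kind of problem (a sketch only, assuming item_type holds a model class name as a string and a Rails 3 style query interface; the :category_id column is likewise an assumption): since :class_name is evaluated when the class body is loaded, resolve the class per record instead of per association.

      # Sketch: :class_name can't take a Proc, so look the class up at runtime.
      class Category < ActiveRecord::Base
        def ads
          item_type.constantize.where(:category_id => id)
        end
      end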

    Read the article

  • Rewriting a statement using LINQ (C#)

    - by Thinking
    Is it possible to write the following using a lambda (C#)? private static void GetRecordList(List<CustomerInfo> lstCustinfo) { for (int i = 1; i <= 5; i++) { if (i % 2 == 0) lstCustinfo.Add(new CustomerInfo { CountryCode = "USA", CustomerAddress = "US Address" + i.ToString(), CustomerName = "US Customer Name" + i.ToString(), ForeignAmount = i * 50 }); else lstCustinfo.Add(new CustomerInfo { CountryCode = "UK", CustomerAddress = "UK Address" + i.ToString(), CustomerName = "UK Customer Name" + i.ToString(), ForeignAmount = i * 80 }); } }
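
    One possible lambda-based rewrite (a sketch; it assumes the same CustomerInfo type as above and needs using System.Linq):

      private static void GetRecordList(List<CustomerInfo> lstCustinfo)
      {
          // Enumerable.Range replaces the for loop; the conditional operator replaces if/else.
          lstCustinfo.AddRange(Enumerable.Range(1, 5).Select(i => i % 2 == 0
              ? new CustomerInfo { CountryCode = "USA", CustomerAddress = "US Address" + i,
                                   CustomerName = "US Customer Name" + i, ForeignAmount = i * 50 }
              : new CustomerInfo { CountryCode = "UK", CustomerAddress = "UK Address" + i,
                                   CustomerName = "UK Customer Name" + i, ForeignAmount = i * 80 }));
      }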

    Read the article

  • Need help with yum, python and PHP in CentOS (I made a complete mess!)

    - by pek
    a while back I wanted to install some plugins for Trac but it required python 2.5 I tried installing it (I don't remember how) and the only thing I managed was to have two versions of python (2.4 and 2.5). Trac still uses the old version but the console uses 2.5 (python -V = Python 2.5.2). Anyway, the problem is not python, the problem is yum (which uses python). I am trying to upgrade my PHP version from 5.1.x to 5.2.x. I tried following this tutorial but when I reach the step with yum I get this error: >[root@XXX]# yum update Loading "installonlyn" plugin Setting up Update Process Setting up repositories Reading repository metadata in from local files Traceback (most recent call last): File "/usr/bin/yum", line 29, in ? yummain.main(sys.argv[1:]) File "/usr/share/yum-cli/yummain.py", line 94, in main result, resultmsgs = base.doCommands() File "/usr/share/yum-cli/cli.py", line 381, in doCommands return self.yum_cli_commands[self.basecmd].doCommand(self, self.basecmd, self.extcmds) File "/usr/share/yum-cli/yumcommands.py", line 150, in doCommand return base.updatePkgs(extcmds) File "/usr/share/yum-cli/cli.py", line 672, in updatePkgs self.doRepoSetup() File "/usr/share/yum-cli/cli.py", line 109, in doRepoSetup self.doSackSetup(thisrepo=thisrepo) File "/usr/lib/python2.4/site-packages/yum/__init__.py", line 338, in doSackSetup self.repos.populateSack(which=repos) File "/usr/lib/python2.4/site-packages/yum/repos.py", line 200, in populateSack sack.populate(repo, with, callback, cacheonly) File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 91, in populate dobj = repo.cacheHandler.getPrimary(xml, csum) File "/usr/lib/python2.4/site-packages/yum/sqlitecache.py", line 100, in getPrimary return self._getbase(location, checksum, 'primary') File "/usr/lib/python2.4/site-packages/yum/sqlitecache.py", line 86, in _getbase (db, dbchecksum) = self.getDatabase(location, metadatatype) File "/usr/lib/python2.4/site-packages/yum/sqlitecache.py", line 82, in getDatabase db = self.makeSqliteCacheFile(filename,cachetype) File "/usr/lib/python2.4/site-packages/yum/sqlitecache.py", line 245, in makeSqliteCacheFile self.createTablesPrimary(db) File "/usr/lib/python2.4/site-packages/yum/sqlitecache.py", line 165, in createTablesPrimary cur.execute(q) File "/usr/lib/python2.4/site-packages/sqlite/main.py", line 244, in execute self.rs = self.con.db.execute(SQL) _sqlite.DatabaseError: near "release": syntax error Any help? Thank you. Update OK, so I've managed to update yum hoping it would solve my problems but now I get a slightly different version of the same error: [root@XXX]# yum -y update Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * addons: mirror.skiplink.com * base: www.gtlib.gatech.edu * epel: mirrors.tummy.com * extras: yum.singlehop.com * updates: centos-distro.cavecreek.net (process:30840): GLib-CRITICAL **: g_timer_stop: assertion `timer != NULL' failed (process:30840): GLib-CRITICAL **: g_timer_destroy: assertion `timer != NULL' failed Traceback (most recent call last): File "/usr/bin/yum", line 29, in ? 
yummain.user_main(sys.argv[1:], exit_code=True) File "/usr/share/yum-cli/yummain.py", line 309, in user_main errcode = main(args) File "/usr/share/yum-cli/yummain.py", line 178, in main result, resultmsgs = base.doCommands() File "/usr/share/yum-cli/cli.py", line 345, in doCommands self._getTs(needTsRemove) File "/usr/lib/python2.4/site-packages/yum/depsolve.py", line 101, in _getTs self._getTsInfo(remove_only) File "/usr/lib/python2.4/site-packages/yum/depsolve.py", line 112, in _getTsInfo pkgSack = self.pkgSack File "/usr/lib/python2.4/site-packages/yum/__init__.py", line 661, in <lambda> pkgSack = property(fget=lambda self: self._getSacks(), File "/usr/lib/python2.4/site-packages/yum/__init__.py", line 501, in _getSacks self.repos.populateSack(which=repos) File "/usr/lib/python2.4/site-packages/yum/repos.py", line 260, in populateSack sack.populate(repo, mdtype, callback, cacheonly) File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 190, in populate dobj = repo_cache_function(xml, csum) File "/usr/lib/python2.4/site-packages/sqlitecachec.py", line 42, in getPrimary self.repoid)) TypeError: Can not create packages table: near "release": syntax error I'm guessing that this "release" thing has something to do with a repository, but I didn't find anything... I went to the sqlitecachec.py at line 42 which writes (line numbers added for convenience): 39: return self.open_database(_sqlitecache.update_primary(location, 40: checksum, 41: self.callback, 42: self.repoid)) Update 2 I think I found the problem. This post suggests that the problem is sqlite and not yum. The version of sqlite I have installed is 3.6.10 but I have no idea which version does python 2.4 uses. ld.so.config contains the following: include ld.so.conf.d/*.conf /usr/local/lib In folder /usr/local/lib I find a symbolic link named libsqlite3.so that points to libsqlite3.so.0.8.6 WHAT IS HAPPENING??????? :S

    Read the article

  • How to install pip/easy_install on Debian 6 for Python 3.2

    - by atomAltera
    I'm trying to install pip or setup tools form python 3.2 in debian 6. First case: apt-get install python3-pip...OK python3 easy_install.py webob Searching for webob Reading http://pypi.python.org/simple/webob/ Reading http://webob.org/ Reading http://pythonpaste.org/webob/ Best match: WebOb 1.2.2 Downloading http://pypi.python.org/packages/source/W/WebOb/WebOb-1.2.2.zip#md5=de0f371b46554709ce5b93c088a11cae Processing WebOb-1.2.2.zip Traceback (most recent call last): File "easy_install.py", line 5, in <module> main() File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 1931, in main with_ei_usage(lambda: File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 1912, in with_ei_usage return f() File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 1935, in <lambda> distclass=DistributionWithoutHelpCommands, **kw File "/usr/local/lib/python3.2/distutils/core.py", line 148, in setup dist.run_commands() File "/usr/local/lib/python3.2/distutils/dist.py", line 917, in run_commands self.run_command(cmd) File "/usr/local/lib/python3.2/distutils/dist.py", line 936, in run_command cmd_obj.run() File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 368, in run self.easy_install(spec, not self.no_deps) File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 608, in easy_install return self.install_item(spec, dist.location, tmpdir, deps) File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 638, in install_item dists = self.install_eggs(spec, download, tmpdir) File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 799, in install_eggs unpack_archive(dist_filename, tmpdir, self.unpack_progress) File "/usr/lib/python3/dist-packages/setuptools/archive_util.py", line 67, in unpack_archive driver(filename, extract_dir, progress_filter) File "/usr/lib/python3/dist-packages/setuptools/archive_util.py", line 154, in unpack_zipfile data = z.read(info.filename) File "/usr/local/lib/python3.2/zipfile.py", line 891, in read with self.open(name, "r", pwd) as fp: File "/usr/local/lib/python3.2/zipfile.py", line 980, in open close_fileobj=not self._filePassed) File "/usr/local/lib/python3.2/zipfile.py", line 489, in __init__ self._decompressor = zlib.decompressobj(-15) AttributeError: 'NoneType' object has no attribute 'decompressobj' Second case: from http://pypi.python.org/pypi/distribute#installation-instructions python3 distribute_setup.py Downloading http://pypi.python.org/packages/source/d/distribute/distribute-0.6.28.tar.gz Extracting in /tmp/tmpv6iei2 Traceback (most recent call last): File "distribute_setup.py", line 515, in <module> main(sys.argv[1:]) File "distribute_setup.py", line 511, in main _install(tarball, _build_install_args(argv)) File "distribute_setup.py", line 73, in _install tar = tarfile.open(tarball) File "/usr/local/lib/python3.2/tarfile.py", line 1746, in open raise ReadError("file could not be opened successfully") tarfile.ReadError: file could not be opened successfully Third case: from http://pypi.python.org/pypi/distribute#installation-instructions tar -xzvf distribute-0.6.28.tar.gz cd distribute-0.6.28 python3 setup.py install Before install bootstrap. 
Scanning installed packages No setuptools distribution found running install running bdist_egg running egg_info writing distribute.egg-info/PKG-INFO writing top-level names to distribute.egg-info/top_level.txt writing dependency_links to distribute.egg-info/dependency_links.txt writing entry points to distribute.egg-info/entry_points.txt reading manifest file 'distribute.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' writing manifest file 'distribute.egg-info/SOURCES.txt' installing library code to build/bdist.linux-x86_64/egg running install_lib running build_py copying distribute.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO copying distribute.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO copying distribute.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO copying distribute.egg-info/entry_points.txt -> build/bdist.linux-x86_64/egg/EGG-INFO copying distribute.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO creating 'dist/distribute-0.6.28-py3.2.egg' and adding 'build/bdist.linux-x86_64/egg' to it Traceback (most recent call last): File "setup.py", line 220, in <module> scripts = scripts, File "/usr/local/lib/python3.2/distutils/core.py", line 148, in setup dist.run_commands() File "/usr/local/lib/python3.2/distutils/dist.py", line 917, in run_commands self.run_command(cmd) File "/usr/local/lib/python3.2/distutils/dist.py", line 936, in run_command cmd_obj.run() File "build/src/setuptools/command/install.py", line 73, in run self.do_egg_install() File "build/src/setuptools/command/install.py", line 93, in do_egg_install self.run_command('bdist_egg') File "/usr/local/lib/python3.2/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/usr/local/lib/python3.2/distutils/dist.py", line 936, in run_command cmd_obj.run() File "build/src/setuptools/command/bdist_egg.py", line 241, in run dry_run=self.dry_run, mode=self.gen_header()) File "build/src/setuptools/command/bdist_egg.py", line 542, in make_zipfile z = zipfile.ZipFile(zip_filename, mode, compression=compression) File "/usr/local/lib/python3.2/zipfile.py", line 689, in __init__ "Compression requires the (missing) zlib module") RuntimeError: Compression requires the (missing) zlib module zlib1g-dev installed Help me please
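
    All three tracebacks die where setuptools expects the zlib module (zipfile needs it to unpack archives and build eggs), so a likely cause (an assumption, not confirmed in the post) is that the /usr/local Python 3.2 was built from source before zlib1g-dev was present, meaning its zlib extension was never compiled. A quick check and rebuild sketch (adjust the source path to your own tree):

      # Does the Python 3.2 that setuptools runs under have zlib at all?
      python3 -c "import zlib; print(zlib.ZLIB_VERSION)"

      # If that raises ImportError: the headers are now installed, so rebuild
      # and reinstall Python 3.2 from source to pick up the zlib extension.
      cd /path/to/Python-3.2.x
      ./configure && make && make install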

    Read the article

  • Refactoring a Single Rails Model with large methods & long join queries trying to do everything

    - by Kelseydh
    I have a working Ruby on Rails Model that I suspect is inefficient, hard to maintain, and full of unnecessary SQL join queries. I want to optimize and refactor this Model (Quiz.rb) to comply with Rails best practices, but I'm not sure how I should do it. The Rails app is a game that has Missions with many Stages. Users complete Stages by answering Questions that have correct or incorrect Answers. When a User tries to complete a stage by answering questions, the User gets a Quiz entry with many Attempts. Each Attempt records an Answer submitted for that Question within the Stage. A user completes a stage or mission by getting every Attempt correct, and their progress is tracked by adding a new entry to the UserMission & UserStage join tables. All of these features work, but unfortunately the Quiz.rb Model has been twisted to handle almost all of it exclusively. The callbacks began at 'Quiz.rb', and because I wasn't sure how to leave the Quiz Model during a multi-model update, I resorted to using Rails Console to have the @quiz instance variable via self.some_method do all the heavy lifting to retrieve every data value for the game's business logic; resulting in large extended join queries that "dance" all around the Database schema. The Quiz.rb Model that Smells: class Quiz < ActiveRecord::Base belongs_to :user has_many :attempts, dependent: :destroy before_save :check_answer before_save :update_user_mission_and_stage accepts_nested_attributes_for :attempts, :reject_if => lambda { |a| a[:answer_id].blank? }, :allow_destroy => true #Checks every answer within each quiz, adding +1 for each correct answer #within a stage quiz, and -1 for each incorrect answer def check_answer stage_score = 0 self.attempts.each do |attempt| if attempt.answer.correct? == true stage_score += 1 elsif attempt.answer.correct == false stage_score - 1 end end stage_score end def winner return true end def update_user_mission_and_stage ####### #Step 1: Checks if UserMission exists, finds or creates one. #if no UserMission for the current mission exists, creates a new UserMission if self.user_has_mission? == false @user_mission = UserMission.new(user_id: self.user.id, mission_id: self.current_stage.mission_id, available: true) @user_mission.save else @user_mission = self.find_user_mission end ####### #Step 2: Checks if current UserStage exists, stops if true to prevent duplicate entry if self.user_has_stage? @user_mission.save return true else ####### ##Step 3: if step 2 returns false: ##Initiates UserStage creation instructions #checks for winner (winner actions need to be defined) if they complete last stage of last mission for a given orientation if self.passed? && self.is_last_stage? && self.is_last_mission? create_user_stage_and_update_user_mission self.winner #NOTE: The rest are the same, but specify conditions that are available to add badges or other actions upon those conditions occurring: ##if user completes first stage of a mission elsif self.passed? && self.is_first_stage? && self.is_first_mission? create_user_stage_and_update_user_mission #creates user badge for finishing first stage of first mission self.user.add_badge(5) self.user.activity_logs.create(description: "granted first-stage badge", type_event: "badge", value: "first-stage") #If user completes last stage of a given mission, creates a new UserMission elsif self.passed? && self.is_last_stage? && self.is_first_mission? 
create_user_stage_and_update_user_mission #creates user badge for finishing first mission self.user.add_badge(6) self.user.activity_logs.create(description: "granted first-mission badge", type_event: "badge", value: "first-mission") elsif self.passed? create_user_stage_and_update_user_mission else self.passed? == false return true end end end #Creates a new UserStage record in the database for a successful Quiz question passing def create_user_stage_and_update_user_mission @nu_stage = @user_mission.user_stages.new(user_id: self.user.id, stage_id: self.current_stage.id) @nu_stage.save @user_mission.save self.user.add_points(50) end #Boolean that defines passing a stage as answering every question in that stage correct def passed? self.check_answer >= self.number_of_questions end #Returns the number of questions asked for that stage's quiz def number_of_questions self.attempts.first.answer.question.stage.questions.count end #Returns the current_stage for the Quiz, routing through 1st attempt in that Quiz def current_stage self.attempts.first.answer.question.stage end #Gives back the position of the stage relative to its mission. def stage_position self.attempts.first.answer.question.stage.position end #will find the user_mission for the current user and stage if it exists def find_user_mission self.user.user_missions.find_by_mission_id(self.current_stage.mission_id) end #Returns true if quiz was for the last stage within that mission #helpful for triggering actions related to a user completing a mission def is_last_stage? self.stage_position == self.current_stage.mission.stages.last.position end #Returns true if quiz was for the first stage within that mission #helpful for triggering actions related to a user completing a mission def is_first_stage? self.stage_position == self.current_stage.mission.stages_ordered.first.position end #Returns true if current user has a UserMission for the current stage def user_has_mission? self.user.missions.ids.include?(self.current_stage.mission.id) end #Returns true if current user has a UserStage for the current stage def user_has_stage? self.user.stages.include?(self.current_stage) end #Returns true if current user is on the last mission based on position within a given orientation def is_first_mission? self.user.missions.first.orientation.missions.by_position.first.position == self.current_stage.mission.position end #Returns true if current user is on the first stage & mission of a given orientation def is_last_mission? self.user.missions.first.orientation.missions.by_position.last.position == self.current_stage.mission.position end end My Question Currently my Rails server takes roughly 500ms to 1 sec to process single @quiz.save action. I am confident that the slowness here is due to sloppy code, not bad Database ERD design. What does a better solution look like? And specifically: Should I use join queries to retrieve values like I did here, or is it better to instantiate new objects within the model instead? Or am I missing a better solution? How should update_user_mission_and_stage be refactored to follow best practices? Relevant Code for Reference: quizzes_controller.rb w/ Controller Route Initiating Callback: class QuizzesController < ApplicationController before_action :find_stage_and_mission before_action :find_orientation before_action :find_question def show end def create @user = current_user @quiz = current_user.quizzes.new(quiz_params) if @quiz.save if @quiz.passed? if @mission.next_mission.nil? && @stage.next_stage.nil? 
redirect_to root_path, notice: "Congratulations, you have finished the last mission!" elsif @stage.next_stage.nil? redirect_to [@mission.next_mission, @mission.first_stage], notice: "Correct! Time for Mission #{@mission.next_mission.position}", info: "Starting next mission" else redirect_to [@mission, @stage.next_stage], notice: "Answer Correct! You passed the stage!" end else redirect_to [@mission, @stage], alert: "You didn't get every question right, please try again." end else redirect_to [@mission, @stage], alert: "Sorry. We were unable to save your answer. Please contact the admministrator." end @questions = @stage.questions.all end private def find_stage_and_mission @stage = Stage.find(params[:stage_id]) @mission = @stage.mission end def find_question @question = @stage.questions.find_by_id params[:id] end def quiz_params params.require(:quiz).permit(:user_id, :attempt_id, {attempts_attributes: [:id, :quiz_id, :answer_id]}) end def find_orientation @orientation = @mission.orientation @missions = @orientation.missions.by_position end end Overview of Relevant ERD Database Relationships: Mission - Stage - Question - Answer - Attempt <- Quiz <- User Mission - UserMission <- User Stage - UserStage <- User Other Models: Mission.rb class Mission < ActiveRecord::Base belongs_to :orientation has_many :stages has_many :user_missions, dependent: :destroy has_many :users, through: :user_missions #SCOPES scope :by_position, -> {order(position: :asc)} def stages_ordered stages.order(:position) end def next_mission self.orientation.missions.find_by_position(self.position.next) end def first_stage next_mission.stages_ordered.first end end Stage.rb: class Stage < ActiveRecord::Base belongs_to :mission has_many :questions, dependent: :destroy has_many :user_stages, dependent: :destroy has_many :users, through: :user_stages accepts_nested_attributes_for :questions, reject_if: :all_blank, allow_destroy: true def next_stage self.mission.stages.find_by_position(self.position.next) end end Question.rb class Question < ActiveRecord::Base belongs_to :stage has_many :answers, dependent: :destroy accepts_nested_attributes_for :answers, :reject_if => lambda { |a| a[:body].blank? }, :allow_destroy => true end Answer.rb: class Answer < ActiveRecord::Base belongs_to :question has_many :attempts, dependent: :destroy end Attempt.rb: class Attempt < ActiveRecord::Base belongs_to :answer belongs_to :quiz end User.rb: class User < ActiveRecord::Base belongs_to :school has_many :activity_logs has_many :user_missions, dependent: :destroy has_many :missions, through: :user_missions has_many :user_stages, dependent: :destroy has_many :stages, through: :user_stages has_many :orientations, through: :school has_many :quizzes, dependent: :destroy has_many :attempts, through: :quizzes def latest_stage_position self.user_missions.last.user_stages.last.stage.position end end UserMission.rb class UserMission < ActiveRecord::Base belongs_to :user belongs_to :mission has_many :user_stages, dependent: :destroy end UserStage.rb class UserStage < ActiveRecord::Base belongs_to :user belongs_to :stage belongs_to :user_mission end
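
    One common direction for a refactor like this (a sketch only, using assumed Rails 4 finder syntax; StageCompletion and #call are illustrative names, not the author's code) is to pull the multi-model progression logic out of the Quiz callbacks into a small service object invoked from the controller, so Quiz keeps only the scoring queries:

      # Sketch of a service object replacing the update_user_mission_and_stage callback.
      class StageCompletion
        def initialize(quiz)
          @quiz = quiz
          @user = quiz.user
        end

        def call
          return unless @quiz.passed?
          stage   = @quiz.current_stage
          mission = @user.user_missions.find_or_create_by(mission_id: stage.mission_id)
          mission.user_stages.find_or_create_by(user_id: @user.id, stage_id: stage.id)
          @user.add_points(50)
        end
      end

      # In QuizzesController#create, after a successful @quiz.save:
      #   StageCompletion.new(@quiz).call

    The find_or_create_by calls replace the user_has_mission?/user_has_stage? guards, and the badge and winner branches become small, separately testable steps inside call.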

    Read the article

  • CodePlex Daily Summary for Monday, July 15, 2013

    CodePlex Daily Summary for Monday, July 15, 2013Popular ReleasesMVC Forum: MVC Forum v1.0: Finally version 1.0 is here! We have been fixing a few bugs, and making sure the release is as stable as possible. We have also changed the way configuration of the application works, mostly how to add your own code or replace some of the standard code with your own. If you download and use our software, please give us some sort of feedback, good or bad!SharePoint 2013 TypeScript Definitions: Release 1.1: TypeScript 0.9 support SharePoint TypeScript Definitions are now compliant with the new version of TypeScript TypeScript generics are now used for defining collection classes (derivatives of SP.ClientCollection object) Improved coverage Added mQuery definitions (m$) Added SPClientAutoFill definitions SP.Utilities namespace is now fully covered SP.UI modal dialog definitions improved CSR definitions improved, added some missing methods and context properties, mostly related to list ...GoAgent GUI: GoAgent GUI ??? 1.0.0: ????GoAgent GUI????,???????????.Net Framework 4.0 ???????: Windows 8 x64 Windows 8 x86 Windows 7 x64 Windows 7 x86 ???????: Windows 8.1 Preview (x86/x64) ???????: Windows XP ????: ????????GoAgent????,????????,?????????????,???????????????????,??????????,????。PiGraph: PiGraph 2.0.8.13: C?p nh?t:Các l?i dã s?a: S?a l?i không nh?p du?c s? âm. L?i tabindex trong giao di?n Thêm hàm Các l?i chua kh?c ph?c: L?i ghi chú nh?p nháy màu. L?i khung ghi chú vu?t ra kh?i biên khi luu file. Luu ý:N?u không kh?i d?ng duoc chuong trình, b?n nên c?p nh?t driver card d? h?a phiên b?n m?i nh?t: AMD Graphics Drivers NVIDIA Driver Xem yêu c?u h? th?ngD3D9Client: D3D9Client R12 for Orbiter Beta: D3D9Client release for orbiter BetaVidCoder: 1.4.23: New in 1.4.23 Added French translation. Fixed non-x264 video encoders not sticking in video tab. New in 1.4 Updated HandBrake core to 0.9.9 Blu-ray subtitle (PGS) support Additional framerates: 30, 50, 59.94, 60 Additional sample rates: 8, 11.025, 12 and 16 kHz Additional higher bitrates for audio Same as Source Constant Framerate 24-bit FLAC encoding Added Windows Phone 8 and Apple TV 3 presets Introduced process isolation for encodes. Now if HandBrake crashes, VidCoder will ...Project Server 2013 Event Handler Admin Tool: PSI Event Admin Tool: Download & exract the File. Use LoggerAdmin to upload the event handlers in project server 2013. PSIEventLogger\LoggerAdmin\bin\DebugGherkin editor: Gherkin Editor Beta 2: Fix issue #7 and add some refactoring and code cleanupNew-NuGetPackage PowerShell Script: New-NuGetPackage.ps1 PowerShell Script v1.2: Show nuget gallery to push to when prompting user if they want to push their package.Site Templates By Steve: SharePoint 2010 CORE Site Theme By Steve WSP: Great Site Theme to start with from Steve. See project home page for install instructions. This is a nice centered, mega-menu, fixed width masterpage with styles. Remember to update the mega menu lists.SharePoint Solution Installer: SharePoint Solution Installer V1.2.8: setup2013.exe now supports CompatibilityLevel to target specific hive Use setup.exe for SP2007 & SP2010. Use setup2013.exe for SP2013.TBox - tool to make developer's life easier.: TBox 1.021: 1)Add console unit tests runner, to run unit tests in parallel from console. Also this new sub tool can save valid xml nunit report. It can help you in continuous integration. 
2)Fix build scripts.LifeInSharepoint Modern UI Update: Version 2: Some minor improvements, including Audience Targeting support for quick launch links. Also removing all NextDocs references.Virtual Photonics: VTS MATLAB Package 1.0.13 Beta: This is the beta release of the MATLAB package with examples of using the VTS libraries within MATLAB. Changes for this release: Added two new examples to vts_solver_demo.m that: 1) generates and plots R(lambda) at a given rho, and chromophore concentrations assuming a power law for scattering, and 2) solves inverse problem for R(lambda) at given rho. This example solves for concentrations of HbO2, Hb and H20 given simulated measurements created using Nurbs scaled Monte Carlo and inverted u...Advanced Resource Tab for Blend: Advanced Resource Tab: This is the first alpha release of the advanced resource tab for Blend for Visual Studio 2012.Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.96: Fix for issue #19957: EXE should output the name of the file(s) being minified. Discussion #449181: throw a Sev-2 warning when trailing commas are detected on an Array literal. Perfectly legal to do so, but the behavior ends up working differently on different browsers, so throw a cross-browser warning. Add a few more known global names for improved ES6 compatibility update Nuget package to version 2.5 and automatically add the AjaxMin.targets to your project when you update the package...Outlook 2013 Add-In: Categories and Colors: This new version has a major change in the drawing of the list items: - Using owner drawn code to format the appointments using GDI (some flickering may occur, but it looks a little bit better IMHO, with separate sections). - Added category color support (if more than one category, only one color will be shown). Here, the colors Outlook uses are slightly different than the ones available in System.Drawing, so I did a first approach matching. ;-) - Added appointment status support (to show fr...Columbus Remote Desktop: 2.0 Sapphire: Added configuration settings Added update notifications Added ability to disable GPU acceleration Fixed connection bugsLINQ to Twitter: LINQ to Twitter v2.1.07: Supports .NET 3.5, .NET 4.0, .NET 4.5, Silverlight 4.0, Windows Phone 7.1, Windows Phone 8, Client Profile, Windows 8, and Windows Azure. 100% Twitter API coverage. Also supports Twitter API v1.1! Also on NuGet.DotNetNuke® Community Edition CMS: 06.02.08: Major Highlights Fixed issue where the application throws an Unhandled Error and an HTTP Response Code of 200 when the connection to the database is lost. Security FixesNone Updated Modules/Providers ModulesNone ProvidersNoneNew Projects[.Net Intl] harroc_c;mallar_a;olouso_f: The goal of this project is to create a web crawler and a web front who allows you to search in your index. You will create a mini (or large!) search engine basButterfly Storage: Butterfly Storage is a data access technology based on object-oriented database model for Windows Store applications.KaveCompany: KaveCompleave that girl alone: a team project!MyClrProfiler: This project helps you learn about and develop your own CLR profiler.NETDeob: Deobfuscate obfuscated .NET files easilyProgram stomatologie: SummarySimple Graph Library: Simple portable class library for graphs data structures. .NET, Silverlight 4/5, Windows Phone, Windows RT, Xbox 360T6502 Emulator: T6502 is a 6502 emulator written in TypeScript using AngularJS. 
The goal is well-organized, readable code over performance.WP8 File Access Webserver: C# HTTP server and web application on Windows Phone 8. Implements file access, browsing and downloading.wpadk: wpadk????wp7?????? ?????????,?????、SDK、wpadk?????????????。??????????????????。??????????????????,????wpadk?????????????????????????????????????。xlmUnit: xlmUnit, Unit Testing

    Read the article

  • Scheduling thread tiles with C++ AMP

    - by Daniel Moth
    This post assumes you are totally comfortable with, what some of us call, the simple model of C++ AMP, i.e. you could write your own matrix multiplication. We are now ready to explore the tiled model, which builds on top of the non-tiled one. Tiling the extent We know that when we pass a grid (which is just an extent under the covers) to the parallel_for_each call, it determines the number of threads to schedule and their index values (including dimensionality). For the single-, two-, and three- dimensional cases you can go a step further and subdivide the threads into what we call tiles of threads (others may call them thread groups). So here is a single-dimensional example: extent<1> e(20); // 20 units in a single dimension with indices from 0-19 grid<1> g(e);      // same as extent tiled_grid<4> tg = g.tile<4>(); …on the 3rd line we subdivided the single-dimensional space into 5 single-dimensional tiles each having 4 elements, and we captured that result in a concurrency::tiled_grid (a new class in amp.h). Let's move on swiftly to another example, in pictures, this time 2-dimensional: So we start on the left with a grid of a 2-dimensional extent which has 8*6=48 threads. We then have two different examples of tiling. In the first case, in the middle, we subdivide the 48 threads into tiles where each has 4*3=12 threads, hence we have 2*2=4 tiles. In the second example, on the right, we subdivide the original input into tiles where each has 2*2=4 threads, hence we have 4*3=12 tiles. Notice how you can play with the tile size and achieve different number of tiles. The numbers you pick must be such that the original total number of threads (in our example 48), remains the same, and every tile must have the same size. Of course, you still have no clue why you would do that, but stick with me. First, we should see how we can use this tiled_grid, since the parallel_for_each function that we know expects a grid. Tiled parallel_for_each and tiled_index It turns out that we have additional overloads of parallel_for_each that accept a tiled_grid instead of a grid. However, those overloads, also expect that the lambda you pass in accepts a concurrency::tiled_index (new in amp.h), not an index<N>. So how is a tiled_index different to an index? A tiled_index object, can have only 1 or 2 or 3 dimensions (matching exactly the tiled_grid), and consists of 4 index objects that are accessible via properties: global, local, tile_origin, and tile. The global index is the same as the index we know and love: the global thread ID. The local index is the local thread ID within the tile. The tile_origin index returns the global index of the thread that is at position 0,0 of this tile, and the tile index is the position of the tile in relation to the overall grid. Confused? Here is an example accompanied by a picture that hopefully clarifies things: array_view<int, 2> data(8, 6, p_my_data); parallel_for_each(data.grid.tile<2,2>(), [=] (tiled_index<2,2> t_idx) restrict(direct3d) { /* todo */ }); Given the code above and the picture on the right, what are the values of each of the 4 index objects that the t_idx variables exposes, when the lambda is executed by T (highlighted in the picture on the right)? If you can't work it out yourselves, the solution follows: t_idx.global       = index<2> (6,3) t_idx.local          = index<2> (0,1) t_idx.tile_origin = index<2> (6,2) t_idx.tile             = index<2> (3,1) Don't move on until you are comfortable with this… the picture really helps, so use it. 
Tiled Matrix Multiplication Example – part 1 Let's paste here the C++ AMP matrix multiplication example, bolding the lines we are going to change (can you guess what the changes will be?) 01: void MatrixMultiplyTiled_Part1(vector<float>& vC, const vector<float>& vA, const vector<float>& vB, int M, int N, int W) 02: { 03: 04: array_view<const float,2> a(M, W, vA); 05: array_view<const float,2> b(W, N, vB); 06: array_view<writeonly<float>,2> c(M, N, vC); 07: parallel_for_each(c.grid, 08: [=](index<2> idx) restrict(direct3d) { 09: 10: int row = idx[0]; int col = idx[1]; 11: float sum = 0.0f; 12: for(int i = 0; i < W; i++) 13: sum += a(row, i) * b(i, col); 14: c[idx] = sum; 15: }); 16: } To turn this into a tiled example, first we need to decide our tile size. Let's say we want each tile to be 16*16 (which assumes that we'll have at least 256 threads to process, and that c.grid.extent.size() is divisible by 256, and moreover that c.grid.extent[0] and c.grid.extent[1] are divisible by 16). So we insert at line 03 the tile size (which must be a compile time constant). 03: static const int TS = 16; ...then we need to tile the grid to have tiles where each one has 16*16 threads, so we change line 07 to be as follows 07: parallel_for_each(c.grid.tile<TS,TS>(), ...that means that our index now has to be a tiled_index with the same characteristics as the tiled_grid, so we change line 08 08: [=](tiled_index<TS, TS> t_idx) restrict(direct3d) { ...which means, without changing our core algorithm, we need to be using the global index that the tiled_index gives us access to, so we insert line 09 as follows 09: index<2> idx = t_idx.global; ...and now this code just works and it is tiled! Closing thoughts on part 1 The process we followed just shows the mechanical transformation that can take place from the simple model to the tiled model (think of this as step 1). In fact, when we wrote the matrix multiplication example originally, the compiler was doing this mechanical transformation under the covers for us (and it has additional smarts to deal with the cases where the total number of threads scheduled cannot be divisible by the tile size). The point is that the thread scheduling is always tiled, even when you use the non-tiled model. But with this mechanical transformation, we haven't gained anything… Hint: our goal with explicitly using the tiled model is to gain even more performance. In the next post, we'll evolve this further (beyond what the compiler can automatically do for us, in this first release), so you can see the full usage of the tiled model and its benefits… Comments about this post by Daniel Moth welcome at the original blog.
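
    For reference, here are the four numbered edits applied to the original listing in one place (assembled purely from the snippets above, keeping the pre-release grid/restrict(direct3d) syntax the post uses):

      void MatrixMultiplyTiled_Part1(vector<float>& vC, const vector<float>& vA,
                                     const vector<float>& vB, int M, int N, int W)
      {
          static const int TS = 16;                           // tile size (line 03)
          array_view<const float, 2> a(M, W, vA);
          array_view<const float, 2> b(W, N, vB);
          array_view<writeonly<float>, 2> c(M, N, vC);
          parallel_for_each(c.grid.tile<TS, TS>(),            // tiled grid (line 07)
              [=](tiled_index<TS, TS> t_idx) restrict(direct3d)   // tiled index (line 08)
          {
              index<2> idx = t_idx.global;                    // global index (line 09)
              int row = idx[0]; int col = idx[1];
              float sum = 0.0f;
              for (int i = 0; i < W; i++)
                  sum += a(row, i) * b(i, col);
              c[idx] = sum;
          });
      }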

    Read the article

  • Tomcat memory usage grows until crash with no GC run

    - by Phil
    I'm administering a server running Tomcat that has been getting a lot of traffic lately. If I monitor memory usage in Task Manager I can see it growing until Tomcat eventually crashes around the 1 GB mark. Here are the memory-relevant settings in Tomcat Properties (this is a Windows server): Initial memory pool: 1024 MB, Maximum memory pool: 1024 MB, -XX:MaxPermSize=256M. The weird thing is that since these problems arose I've deployed Lambda Probe to the Tomcat instance, and the memory usage values I see there are much lower; for example, Task Manager might show 467 MB used while the "Total" in Probe is 212 MB. Also, the Maximum Total listed in Probe is 1.29 GB, when I would have expected 1 GB, the maximum memory set above. If I force the garbage collector to run using Probe, I can keep Tomcat from crashing for a while (indefinitely, AFAIK). So why doesn't the GC run automatically and keep Tomcat from crashing? Thanks.
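
    A first diagnostic step often used for this kind of symptom (an assumption, not something from the original question): turn on GC logging in the Tomcat service's Java options so you can see whether collections really are not running and how the heap behaves between them, for example:

      -verbose:gc
      -XX:+PrintGCDetails
      -XX:+PrintGCTimeStamps
      -Xloggc:C:\tomcat\logs\gc.log

    Note also that Task Manager counts the whole process (Java heap, PermGen, native memory, thread stacks), while Probe reports the JVM memory pools, so some mismatch between the two numbers is expected.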

    Read the article

  • Braces (syntax) highlighting in OpenOffice Math formula text editor

    - by Oleksandr Bolotov
    When you use OpenOffice Math, you see the formula in the upper part and the formula text editor in the lower part, almost like this: %sigma = 2 %mu %epsilon + %lambda Tr(%epsilon)I So my questions are: How can I replace OpenOffice Math's formula text editor with my own text editor? Or how can I enable brace (syntax) highlighting in the embedded editor? Are there any extensions for anything like this? I need this because sometimes there are so many braces that it's hard to tell which ones match. Please don't suggest using MathType or Mathematica (or anything else) instead of OpenOffice Math, because I'm almost happy with it. :)

    Read the article

  • Getting rid of GNU Emacs's menu bar in terminal windows

    - by Ernest A
    How do I get rid of Emacs's menu bar in terminal windows? The standard answer is to put (when (not (display-graphic-p)) (menu-bar-mode -1)) in init.el. However, this solution isn't good, because all it does is remove the menu bar after the fact; you can still see it for a split second, which is very annoying. Looking at the source code in startup.el I don't see an obvious solution to this problem. I think the only way is to use before-init-hook. Maybe this could do the trick? (add-hook 'before-init-hook (lambda () (setq emacs-basic-display t))) But this hook is run before init.el and the other init files are evaluated, so how is one supposed to use it?

    Read the article

  • overriding ctype<wchar_t>

    - by Potatoswatter
    I'm writing a lambda calculus interpreter for fun and practice. I got iostreams to properly tokenize identifiers by adding a ctype facet which defines punctuation as whitespace: struct token_ctype : ctype<char> { mask t[ table_size ]; token_ctype() : ctype<char>( t ) { for ( size_t tx = 0; tx < table_size; ++ tx ) { t[tx] = isalnum( tx )? alnum : space; } } }; (classic_table() would probably be cleaner but that doesn't work on OS X!) And then swap the facet in when I hit an identifier: locale token_loc( in.getloc(), new token_ctype ); … locale const &oldloc = in.imbue( token_loc ); in.unget() >> token; in.imbue( oldloc ); There seems to be surprisingly little lambda calculus code on the Web. Most of what I've found so far is full of unicode ? characters. So I thought to try adding Unicode support. But ctype<wchar_t> works completely differently from ctype<char>. There is no master table; there are four methods do_is x2, do_scan_is, and do_scan_not. So I did this: struct token_ctype : ctype< wchar_t > { typedef ctype<wchar_t> base; bool do_is( mask m, char_type c ) const { return base::do_is(m,c) || (m&space) && ( base::do_is(punct,c) || c == L'?' ); } const char_type* do_is (const char_type* lo, const char_type* hi, mask* vec) const { base::do_is(lo,hi,vec); for ( mask *vp = vec; lo != hi; ++ vp, ++ lo ) { if ( *vp & punct || *lo == L'?' ) *vp |= space; } return hi; } const char_type *do_scan_is (mask m, const char_type* lo, const char_type* hi) const { if ( m & space ) m |= punct; hi = do_scan_is(m,lo,hi); if ( m & space ) hi = find( lo, hi, L'?' ); return hi; } const char_type *do_scan_not (mask m, const char_type* lo, const char_type* hi) const { if ( m & space ) { m |= punct; while ( * ( lo = base::do_scan_not(m,lo,hi) ) == L'?' && lo != hi ) ++ lo; return lo; } return base::do_scan_not(m,lo,hi); } }; (Apologies for the flat formatting; the preview converted the tabs differently.) The code is WAY less elegant. I does better express the notion that only punctuation is additional whitespace, but that would've been fine in the original had I had classic_table. Is there a simpler way to do this? Do I really need all those overloads? (Testing showed do_scan_not is extraneous here, but I'm thinking more broadly.) Am I abusing facets in the first place? Is the above even correct? Would it be better style to implement less logic?

    Read the article

  • Matlab: Optimization by perturbing variable

    - by S_H
    My main script contains following code: %# Grid and model parameters nModel=50; nModel_want=1; nI_grid1=5; Nth=1; nRow.Scale1=5; nCol.Scale1=5; nRow.Scale2=5^2; nCol.Scale2=5^2; theta = 90; % degrees a_minor = 2; % range along minor direction a_major = 5; % range along major direction sill = var(reshape(Deff_matrix_NthModel,nCell.Scale1,1)); % variance of the coarse data matrix of size nRow.Scale1 X nCol.Scale1 %# Covariance computation % Scale 1 for ihRow = 1:nRow.Scale1 for ihCol = 1:nCol.Scale1 [cov.Scale1(ihRow,ihCol),heff.Scale1(ihRow,ihCol)] = general_CovModel(theta, ihCol, ihRow, a_minor, a_major, sill, 'Exp'); end end % Scale 2 for ihRow = 1:nRow.Scale2 for ihCol = 1:nCol.Scale2 [cov.Scale2(ihRow,ihCol),heff.Scale2(ihRow,ihCol)] = general_CovModel(theta, ihCol/(nCol.Scale2/nCol.Scale1), ihRow/(nRow.Scale2/nRow.Scale1), a_minor, a_major, sill/(nRow.Scale2*nCol.Scale2), 'Exp'); end end %# Scale-up of fine scale values by averaging [covAvg.Scale2,var_covAvg.Scale2,varNorm_covAvg.Scale2] = general_AverageProperty(nRow.Scale2/nRow.Scale1,nCol.Scale2/nCol.Scale1,1,nRow.Scale1,nCol.Scale1,1,cov.Scale2,1); I am using two functions, general_CovModel() and general_AverageProperty(), in my main script which are given as following: function [cov,h_eff] = general_CovModel(theta, hx, hy, a_minor, a_major, sill, mod_type) % mod_type should be in strings angle_rad = theta*(pi/180); % theta in degrees, angle_rad in radians R_theta = [sin(angle_rad) cos(angle_rad); -cos(angle_rad) sin(angle_rad)]; h = [hx; hy]; lambda = a_minor/a_major; D_lambda = [lambda 0; 0 1]; h_2prime = D_lambda*R_theta*h; h_eff = sqrt((h_2prime(1)^2)+(h_2prime(2)^2)); if strcmp(mod_type,'Sph')==1 || strcmp(mod_type,'sph') ==1 if h_eff<=a cov = sill - sill.*(1.5*(h_eff/a_minor)-0.5*((h_eff/a_minor)^3)); else cov = sill; end elseif strcmp(mod_type,'Exp')==1 || strcmp(mod_type,'exp') ==1 cov = sill-(sill.*(1-exp(-(3*h_eff)/a_minor))); elseif strcmp(mod_type,'Gauss')==1 || strcmp(mod_type,'gauss') ==1 cov = sill-(sill.*(1-exp(-((3*h_eff)^2/(a_minor^2))))); end and function [PropertyAvg,variance_PropertyAvg,NormVariance_PropertyAvg]=... general_AverageProperty(blocksize_row,blocksize_col,blocksize_t,... nUpscaledRow,nUpscaledCol,nUpscaledT,PropertyArray,omega) % This function computes average of a property and variance of that averaged % property using power averaging PropertyAvg=zeros(nUpscaledRow,nUpscaledCol,nUpscaledT); %# Average of property for k=1:nUpscaledT, for j=1:nUpscaledCol, for i=1:nUpscaledRow, sum=0; for a=1:blocksize_row, for b=1:blocksize_col, for c=1:blocksize_t, sum=sum+(PropertyArray((i-1)*blocksize_row+a,(j-1)*blocksize_col+b,(k-1)*blocksize_t+c).^omega); % add all the property values in 'blocksize_x','blocksize_y','blocksize_t' to one variable end end end PropertyAvg(i,j,k)=(sum/(blocksize_row*blocksize_col*blocksize_t)).^(1/omega); % take average of the summed property end end end %# Variance of averageed property variance_PropertyAvg=var(reshape(PropertyAvg,... nUpscaledRow*nUpscaledCol*nUpscaledT,1),1,1); %# Normalized variance of averageed property NormVariance_PropertyAvg=variance_PropertyAvg./(var(reshape(... PropertyArray,numel(PropertyArray),1),1,1)); Question: Using Matlab, I would like to optimize covAvg.Scale2 such that it matches closely with cov.Scale1 by perturbing/varying any (or all) of the following variables 1) a_minor 2) a_major 3) theta Thanks.
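
    One way to frame the optimization (a sketch only; compute_covAvg_Scale2 is a hypothetical wrapper you would write around the two covariance loops and general_AverageProperty so they take the three parameters as inputs): treat the mismatch between covAvg.Scale2 and cov.Scale1 as an objective function and let a derivative-free optimizer such as fminsearch perturb a_minor, a_major and theta.

      % Sketch: parameter fitting with fminsearch (Nelder-Mead simplex).
      target    = cov.Scale1;
      objective = @(p) norm(reshape(compute_covAvg_Scale2(p(1), p(2), p(3)) - target, [], 1));
      p0        = [a_minor, a_major, theta];   % start from the current values
      [p_opt, misfit] = fminsearch(objective, p0);
      fprintf('a_minor=%.3f  a_major=%.3f  theta=%.1f  misfit=%.4g\n', ...
              p_opt(1), p_opt(2), p_opt(3), misfit);

    fminsearch ignores bounds, so if theta must stay within, say, [0, 180], you would either reparameterize or switch to a constrained solver such as fmincon (Optimization Toolbox).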

    Read the article

  • Preventing multiple repeat selection of synchronized Controls?

    - by BillW
    The working code sample here synchronizes (single) selection in a TreeView, ListView, and ComboBox via the use of lambda expressions in a dictionary where the Key in the dictionary is a Control, and the Value of each Key is an Action<int. Where I am stuck is that I am getting multiple repetitions of execution of the code that sets the selection in the various controls in a way that's unexpected : it's not recursing : there's no StackOverFlow error happening; but, I would like to figure out why the current strategy for preventing multiple selection of the same controls is not working. Perhaps the real problem here is distinguishing between a selection update triggered by the end-user and a selection update triggered by the code that synchronizes the other controls ? Note: I've been experimenting with using Delegates, and forms of Delegates like Action<T>, to insert executable code in Dictionaries : I "learn best" by posing programming "challenges" to myself, and implementing them, as well as studying, at the same time, the "golden words" of luminaries like Skeet, McDonald, Liberty, Troelsen, Sells, Richter. Note: Appended to this question/code, for "deep background," is a statement of how I used to do things in pre C#3.0 days where it seemed like I did need to use explicit measures to prevent recursion when synchronizing selection. Code : Assume a WinForms standard TreeView, ListView, ComboBox, all with the same identical set of entries (i.e., the TreeView has only root nodes; the ListView, in Details View, has one Column). private Dictionary<Control, Action<int>> ControlToAction = new Dictionary<Control, Action<int>>(); private void Form1_Load(object sender, EventArgs e) { // add the Controls to be synchronized to the Dictionary // with appropriate Action<int> lambda expressions ControlToAction.Add(treeView1, (i => { treeView1.SelectedNode = treeView1.Nodes[i]; })); ControlToAction.Add(listView1, (i => { listView1.Items[i].Selected = true; })); ControlToAction.Add(comboBox1, (i => { comboBox1.SelectedIndex = i; })); } private void synchronizeSelection(int i, Control currentControl) { foreach(Control theControl in ControlToAction.Keys) { // skip the 'current control' if (theControl == currentControl) continue; // for debugging only Console.WriteLine(theControl.Name + " synchronized"); // execute the Action<int> associated with the Control ControlToAction[theControl](i); } } private void treeView1_AfterSelect(object sender, TreeViewEventArgs e) { synchronizeSelection(e.Node.Index, treeView1); } private void listView1_SelectedIndexChanged(object sender, EventArgs e) { // weed out ListView SelectedIndexChanged firing // with SelectedIndices having a Count of #0 if (listView1.SelectedIndices.Count > 0) { synchronizeSelection(listView1.SelectedIndices[0], listView1); } } private void comboBox1_SelectedValueChanged(object sender, EventArgs e) { if (comboBox1.SelectedIndex > -1) { synchronizeSelection(comboBox1.SelectedIndex, comboBox1); } } background : pre C# 3.0 Seems like, back in pre C# 3.0 days, I was always using a boolean flag to prevent recursion when multiple controls were updated. 
For example, I'd typically have code like this for synchronizing a TreeView and ListView : assuming each Item in the ListView was synchronized with a root-level node of the TreeView via a common index : // assume ListView is in 'Details View,' has a single column, // MultiSelect = false // FullRowSelect = true // HideSelection = false; // assume TreeView // HideSelection = false // FullRowSelect = true // form scoped variable private bool dontRecurse = false; private void treeView1_AfterSelect(object sender, TreeViewEventArgs e) { if(dontRecurse) return; dontRecurse = true; listView1.Items[e.Node.Index].Selected = true; dontRecurse = false; } private void listView1_SelectedIndexChanged(object sender, EventArgs e) { if(dontRecurse) return // weed out ListView SelectedIndexChanged firing // with SelectedIndices having a Count of #0 if (listView1.SelectedIndices.Count > 0) { dontRecurse = true; treeView1.SelectedNode = treeView1.Nodes[listView1.SelectedIndices[0]]; dontRecurse = false; } } Then it seems, somewhere around FrameWork 3~3.5, I could get rid of the code to suppress recursion, and there was was no recursion (at least not when synchronizing a TreeView and a ListView). By that time it had become a "habit" to use a boolean flag to prevent recursion, and that may have had to do with using a certain third party control.
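
    One way to carry the old boolean-flag idea over to the dictionary design (a sketch, not the original author's code): guard synchronizeSelection itself, so the selection-changed events raised by your own programmatic updates are ignored.

      private bool suppressSync;   // true while we are updating the controls ourselves

      private void synchronizeSelection(int i, Control currentControl)
      {
          if (suppressSync) return;              // event caused by our own update: ignore it
          suppressSync = true;
          try
          {
              foreach (Control theControl in ControlToAction.Keys)
              {
                  if (theControl == currentControl) continue;
                  ControlToAction[theControl](i); // may raise that control's selection event
              }
          }
          finally
          {
              suppressSync = false;
          }
      }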

    Read the article

  • Complex Rails queries across multiple tables, unions, and will_paginate. Solved.

    - by uberllama
    Hi folks. I've been working on a complex "user feed" type of functionality for a while now, and after experimenting with various union plugins, hacking named scopes, and brute force, have arrived at a solution I'm happy with. S.O. has been hugely helpful for me, so I thought I'd post it here in hopes that it might help others and also to get feedback -- it's very possible that I worked on this so long that I walked down an unnecessarily complicated road. For the sake of my example, I'll use users, groups, and articles. A user can follow other users to get a feed of their articles. They can also join groups and get a feed of articles that have been added to those groups. What I needed was a combined, pageable feed of distinct articles from a user's contacts and groups. Let's begin. user.rb has_many :articles has_many :contacts has_many :contacted_users, :through => :contacts has_many :memberships has_many :groups, :through => :memberships contact.rb belongs_to :user belongs_to :contacted_user, :class_name => "User", :foreign_key => "contacted_user_id" article.rb belongs_to :user has_many :submissions has_many :groups, :through => :submissions group.rb has_many :memberships has_many :users, :through => :memberships has_many :submissions has_many :articles, :through => :submissions Those are the basic models that define my relationships. Now, I add two named scopes to the Article model so that I can get separate feeds of both contact articles and group articles should I desire. article.rb # Get all articles by user's contacts named_scope :by_contacts, lambda {|user| {:joins => "inner join contacts on articles.user_id = contacts.contacted_user_id", :conditions => ["articles.published = 1 and contacts.user_id = ?", user.id]} } # Get all articles in user's groups. This does an additional query to get the user's group IDs, then uses those in an IN clause named_scope :by_groups, lambda {|user| {:select => "DISTINCT articles.*", :joins => :submissions, :conditions => {:submissions => {:group_id => user.group_ids}}} } Now I have to create a method that will provide a UNION of these two feeds into one. Since I'm using Rails 2.3.5, I have to use the construct_finder_sql method to render a scope into its base sql. In Rails 3.0, I could use the to_sql method. user.rb def feed "(#{Article.by_groups(self).send(:construct_finder_sql,{})}) UNION (#{Article.by_contacts(self).send(:construct_finder_sql,{})})" end And finally, I can now call this method and paginate it from my controller using will_paginate's paginate_by_sql method. HomeController.rb @articles = Article.paginate_by_sql(current_user.feed, :page => 1) And we're done! It may seem simple now, but it was a lot of work getting there. Feedback is always appreciated. In particular, it would be great to get away from some of the raw sql hacking. Cheers.
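
    For readers on Rails 3, the to_sql variant the author mentions would look roughly like this (a sketch; same scopes assumed):

      # user.rb (Rails 3+): to_sql replaces the construct_finder_sql hack
      def feed
        "(#{Article.by_groups(self).to_sql}) UNION (#{Article.by_contacts(self).to_sql})"
      end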

    Read the article

  • Dynamically loading modules in Python (+ multiprocessing question)

    - by morpheous
    I am writing a Python package which reads the list of modules (along with ancillary data) from a configuration file. I then want to iterate through each of the dynamically loaded modules and invoke a do_work() function in it which will spawn a new process, so that the code runs ASYNCHRONOUSLY in a separate process. At the moment, I am importing the list of all known modules at the beginning of my main script - this is a nasty hack I feel, and is not very flexible, as well as being a maintenance pain. This is the function that spawns the processes. I will like to modify it to dynamically load the module when it is encountered. The key in the dictionary is the name of the module containing the code: def do_work(work_info): for (worker, dataset) in work_info.items(): #import the module defined by variable worker here... # [Edit] NOT using threads anymore, want to spawn processes asynchronously here... #t = threading.Thread(target=worker.do_work, args=[dataset]) # I'll NOT dameonize since spawned children need to clean up on shutdown # Since the threads will be holding resources #t.daemon = True #t.start() Question 1 When I call the function in my script (as written above), I get the following error: AttributeError: 'str' object has no attribute 'do_work' Which makes sense, since the dictionary key is a string (name of the module to be imported). When I add the statement: import worker before spawning the thread, I get the error: ImportError: No module named worker This is strange, since the variable name rather than the value it holds are being used - when I print the variable, I get the value (as I expect) whats going on? Question 2 As I mentioned in the comments section, I realize that the do_work() function written in the spawned children needs to cleanup after itself. My understanding is to write a clean_up function that is called when do_work() has completed successfully, or an unhandled exception is caught - is there anything more I need to do to ensure resources don't leak or leave the OS in an unstable state? Question 3 If I comment out the t.daemon flag statement, will the code stil run ASYNCHRONOUSLY?. The work carried out by the spawned children are pretty intensive, and I don't want to have to be waiting for one child to finish before spawning another child. BTW, I am aware that threading in Python is in reality, a kind of time sharing/slicing - thats ok Lastly is there a better (more Pythonic) way of doing what I'm trying to do? [Edit] After reading a little more about Pythons GIL and the threading (ahem - hack) in Python, I think its best to use separate processes instead (at least IIUC, the script can take advantage of multiple processes if they are available), so I will be spawning new processes instead of threads. I have some sample code for spawning processes, but it is a bit trivial (using lambad functions). I would like to know how to expand it, so that it can deal with running functions in a loaded module (like I am doing above). This is a snippet of what I have: def do_mp_bench(): q = mp.Queue() # Not only thread safe, but "process safe" p1 = mp.Process(target=lambda: q.put(sum(range(10000000)))) p2 = mp.Process(target=lambda: q.put(sum(range(10000000)))) p1.start() p2.start() r1 = q.get() r2 = q.get() return r1 + r2 How may I modify this to process a dictionary of modules and run a do_work() function in each loaded module in a new process?
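
    A sketch that combines the two pieces (assumptions: every dictionary key is an importable module name, each such module defines do_work(dataset), and Python 2.7+ for importlib; this is not the original author's code):

      import importlib
      import multiprocessing as mp

      def do_work(work_info):
          """Spawn one process per (module name, dataset) pair and wait for them."""
          processes = []
          for worker_name, dataset in work_info.items():
              module = importlib.import_module(worker_name)  # dynamic import by string
              p = mp.Process(target=module.do_work, args=(dataset,))
              p.start()
              processes.append(p)
          for p in processes:   # each child is expected to clean up after itself
              p.join()

    On Python 2.6 and earlier, __import__(worker_name) can stand in for importlib.import_module.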

    Read the article

  • How to select table column names in a view and pass to controller in rails?

    - by zachd1_618
    So I am new to Rails, and OO programming in general. I have some grasp of the MVC architecture. My goal is to make a (nearly) completely dynamic plug-and-play plotting web server. I am fairly confused by params, forms, and select helpers. What I want to do is use Rails drop-downs to pass parameters as strings to my controller, which will use the params to select certain column data from my database and plot it dynamically. I have the latter part of the task working, but I can't seem to pass values from my view to the controller. For simplicity's sake, say my database schema looks like this:

        --------------Plot---------------
        |____x____|____y1____|____y2____|
        |    1    |    1     |    1     |
        |    2    |    2     |    4     |
        |    3    |    3     |    9     |
        |    4    |    4     |    16    |
        |    5    |    5     |    25    |

    ...and in my Model, I have dynamic selector scopes that let me select just certain columns of data, in Plot.rb:

        class Plot < ActiveRecord::Base
          scope :select_var, lambda {|varname| select(varname)}
          scope :between_x, lambda {|x1,x2| where("x BETWEEN ? and ?","#{x1}","#{x2}")}
        end

    So this way, I can call:

        irb>> @p1 = Plot.select_var(['x','y1']).between_x(1,3)

    and get in return a class where @p1.x and @p1.y1 are my only attributes, only for values between x=1 and x=3, which I dynamically plot. I want to start off in a view (plot/index) where I can dynamically select which variable names (table column names) and which rows to fetch from the database and plot. The problem is that most select helpers don't seem to work with columns in the database, only rows. So to select columns, I first get an array of column names that exist in my database with a function I wrote.

    Plots controller:

        def index
          d = Plot.first
          @tags = d.list_vars
        end

    So @tags = ['x','y1','y2']. Then in my plot/index.html.erb I try to use a drop-down to select which variables I send back to the controller.

    index.html.erb:

        <%= select_tag( :variable, options_for_select(@plots.first.list_vars,:name,:multiple=>:true) )%>
        <%= button_to 'Plot now!', :controller =>"plots/plot_vars", :variable => params[:variable]%>

    Finally, in the controller again:

        def plot_vars
          @plot_data = Plot.select_vars([params[:variable]])
        end

    The problem is that every time I try this (or one of a hundred variations thereof), params[:variable] is nil. How can I use a drop-down to pass a parameter with string variable names to the controller? Sorry it's so long; I have been struggling with this for about a month now. :-( I think my biggest problem is that this setup doesn't really match the Rails architecture. I don't have "users" and "articles" as individual entities. I really have a data structure, not a data object. Trying to work with the structure in terms of data-object speak is not necessarily the easiest thing to do, I think. For background: my actual database has about 250 columns and a couple million rows, and they get changed and modified from time to time. I know I can make the database smarter, but it's not worth it on my end. I work at a scientific institute where there are a ton of projects with databases just like this. Each one has a web developer that spends months setting up a web interface and their own janky plotting setup. I want to make this completely dynamic, as a plug-and-play solution, so all you have to do is specify your database connection and this Rails setup will automatically show and plot whichever data you want in it. I am more of a sequential programmer and number cruncher, as are many people here. I think this project could be very helpful in the end, but it's difficult for me to figure out right now.

    Read the article

  • Employee Info Starter Kit - Visual Studio 2010 and .NET 4.0 Version (4.0.0) Available

    - by joycsharp
    Employee Info Starter Kit is an ASP.NET-based web application with very simple user requirements: create, read, update and delete (CRUD) the employee info of a company. Based on just a database table, it explores and solves all major problems in the web development architectural space. This open source starter kit extensively uses major features available in the latest Visual Studio, ASP.NET and SQL Server to make robust, scalable, secure and maintainable web applications quickly and easily. Since its first release, this starter kit has achieved huge popularity in the web developer community, with 140,000+ downloads from the project web site. Visual Studio 2010 and .NET 4.0 came with lots of exciting features to make software developers' lives easier. A new version (v4.0.0) of Employee Info Starter Kit is now available in both MSDN Code Gallery and CodePlex. Check out the latest version of this starter kit to enjoy the cool features available in Visual Studio 2010 and .NET 4.0. [ Release Notes ]

    Architectural Overview: simple 2-layer architecture (user interface and data access layer) with 1 optional cache layer; ASP.NET Web Form based user interface; custom Entity Data Container (with primitive C# types for data fields); Active Record Design Pattern based Data Access Layer, implemented in C# and Entity Framework 4.0; SQL Server stored procedures to perform the actual CRUD operations; standard infrastructure (architecture, helper utility) for automated integration (bottom-up manner) and unit testing.

    Technology Utilized:
      Programming Languages/Scripts - Browser side: JavaScript; Web server side: C# 4.0; Database server side: T-SQL
      .NET Framework Components - .NET 4.0 Entity Framework, Optional/Named Parameters, Tuple; .NET 3.0+ Extension Method, Lambda Expressions, Anonymous Type, Query Expressions, Automatically Implemented Properties, LINQ; .NET 2.0+ Partial Classes, Generic Type, Nullable Type
      ASP.NET - 3.5+ List View (TBD), Data Pager (TBD); 2.0+ Grid View, Form View, Skin, Theme, Master Page, Object Data Source; 1.0+ Role Based Security
      Visual Studio Features - 2010 CodedUI Test, Layer Diagram, Sequence Diagram, Directed Graph; 2005+ Database Unit Test, Unit Test, Web Test, Load Test
      SQL Server Features - 2005 Stored Procedure, Xml type, Paging support

    Read the article

  • Parallelism in .NET – Part 2, Simple Imperative Data Parallelism

    - by Reed
    In my discussion of Decomposition of the problem space, I mentioned that Data Decomposition is often the simplest abstraction to use when trying to parallelize a routine.  If a problem can be decomposed based off the data, we will often want to use what MSDN refers to as Data Parallelism as our strategy for implementing our routine.  The Task Parallel Library in .NET 4 makes implementing Data Parallelism, for most cases, very simple.

    Data Parallelism is the main technique we use to parallelize a routine which can be decomposed based off data.  Data Parallelism refers to taking a single collection of data, and having a single operation be performed concurrently on elements in the collection.  One side note here: Data Parallelism is also sometimes referred to as the Loop Parallelism Pattern or Loop-level Parallelism.  In general, for this series, I will try to use the terminology used in the MSDN Documentation for the Task Parallel Library.  This should make it easier to investigate these topics in more detail.

    Once we've determined we have a problem that, potentially, can be decomposed based on data, implementation using Data Parallelism in the TPL is quite simple.  Let's take our example from the Data Decomposition discussion - a simple contrast stretching filter.  Here, we have a collection of data (pixels), and we need to run a simple operation on each pixel.  Once we know the minimum and maximum values, we most likely would have some simple code like the following:

        for (int row=0; row < pixelData.GetUpperBound(0); ++row)
        {
            for (int col=0; col < pixelData.GetUpperBound(1); ++col)
            {
                pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
            }
        }

    This simple routine loops through a two dimensional array of pixelData, and calls the AdjustContrast routine on each pixel. As I mentioned, when you're decomposing a problem space, most iteration statements are potentially candidates for data decomposition.  Here, we're using two for loops - one looping through rows in the image, and a second nested loop iterating through the columns.  We then perform one, independent operation on each element based on those loop positions. This is a prime candidate - we have no shared data, no dependencies on anything but the pixel which we want to change.  Since we're using a for loop, we can easily parallelize this using the Parallel.For method in the TPL:

        Parallel.For(0, pixelData.GetUpperBound(0), row =>
        {
            for (int col=0; col < pixelData.GetUpperBound(1); ++col)
            {
                pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
            }
        });

    Here, by simply changing our first for loop to a call to Parallel.For, we can parallelize this portion of our routine.  Parallel.For works, as do many methods in the TPL, by creating a delegate and using it as an argument to a method.  In this case, our for loop iteration block becomes a delegate created via a lambda expression.  This lets you write code that, superficially, looks similar to the familiar for loop, but functions quite differently at runtime.

    We could easily do this to our second for loop as well, but that may not be a good idea.  There is a balance to be struck when writing parallel code.  We want to have enough work items to keep all of our processors busy, but the more we partition our data, the more overhead we introduce.  In this case, we have an image of data - most likely hundreds of pixels in both dimensions.  By just parallelizing our first loop, each row of pixels can be run as a single task.  With hundreds of rows of data, we are providing fine enough granularity to keep all of our processors busy. If we parallelize both loops, we're potentially creating millions of independent tasks.  This introduces extra overhead with no extra gain, and will actually reduce our overall performance.  This leads to my first guideline when writing parallel code:

        Partition your problem into enough tasks to keep each processor busy throughout the operation, but not more than necessary to keep each processor busy.

    Also note that I parallelized the outer loop.  I could have just as easily partitioned the inner loop.  However, partitioning the inner loop would have led to many more discrete work items, each with a smaller amount of work (operate on one pixel instead of one row of pixels).  My second guideline when writing parallel code reflects this:

        Partition your problem in a way to place the most work possible into each task.

    This typically means, in practice, that you will want to parallelize the routine at the "highest" point possible in the routine, typically the outermost loop.  If you're looking at parallelizing methods which call other methods, you'll want to try to partition your work high up in the stack - as you get into lower level methods, the performance impact of parallelizing your routines may not overcome the overhead introduced.

    Parallel.For works great for situations where we know the number of elements we're going to process in advance.  If we're iterating through an IList<T> or an array, this is a typical approach.  However, there are other iteration statements common in C#.  In many situations, we'll use foreach instead of a for loop.  This can be more understandable and easier to read, but also has the advantage of working with collections which only implement IEnumerable<T>, where we do not know the number of elements involved in advance.

    As an example, let's take the following situation.  Say we have a collection of Customers, and we want to iterate through each customer, check some information about the customer, and if a certain case is met, send an email to the customer and update our instance to reflect this change.  Normally, this might look something like:

        foreach(var customer in customers)
        {
            // Run some process that takes some time...
            DateTime lastContact = theStore.GetLastContact(customer);
            TimeSpan timeSinceContact = DateTime.Now - lastContact;

            // If it's been more than two weeks, send an email, and update...
            if (timeSinceContact.Days > 14)
            {
                theStore.EmailCustomer(customer);
                customer.LastEmailContact = DateTime.Now;
            }
        }

    Here, we're doing a fair amount of work for each customer in our collection, but we don't know how many customers exist. If we assume that theStore.GetLastContact(customer) and theStore.EmailCustomer(customer) are both side-effect free, thread safe operations, we could parallelize this using Parallel.ForEach:

        Parallel.ForEach(customers, customer =>
        {
            // Run some process that takes some time...
            DateTime lastContact = theStore.GetLastContact(customer);
            TimeSpan timeSinceContact = DateTime.Now - lastContact;

            // If it's been more than two weeks, send an email, and update...
            if (timeSinceContact.Days > 14)
            {
                theStore.EmailCustomer(customer);
                customer.LastEmailContact = DateTime.Now;
            }
        });

    Just like Parallel.For, we rework our loop into a method call accepting a delegate created via a lambda expression.  This keeps our new code very similar to our original iteration statement; however, this will now execute in parallel.  The same guidelines apply with Parallel.ForEach as with Parallel.For.

    The other iteration statements, do and while, do not have direct equivalents in the Task Parallel Library.  These, however, are very easy to implement using Parallel.ForEach and the yield keyword.

    Most applications can benefit from implementing some form of Data Parallelism.  Iterating through collections and performing "work" is a very common pattern in nearly every application.  When the problem can be decomposed by data, we often can parallelize the workload by merely changing foreach statements to Parallel.ForEach method calls, and for loops to Parallel.For method calls.  Any time your program operates on a collection, and does a set of work on each item in the collection where that work is not dependent on other information, you very likely have an opportunity to parallelize your routine.
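    Reed's examples are, of course, C#/TPL. Purely as a cross-language illustration of the same row-wise data-parallel pattern (and matching the Python used elsewhere on this page), here is a small hypothetical sketch using Python's multiprocessing.Pool; the toy image and the adjust_contrast helper are invented for the example and are not part of the original article.

        from multiprocessing import Pool

        def adjust_contrast(value, min_pixel, max_pixel):
            # Toy contrast stretch: rescale value into the 0-255 range.
            return (value - min_pixel) * 255 // max(max_pixel - min_pixel, 1)

        def stretch_row(args):
            # One task per row: coarse enough to limit overhead, numerous enough to keep workers busy.
            row, min_pixel, max_pixel = args
            return [adjust_contrast(v, min_pixel, max_pixel) for v in row]

        if __name__ == "__main__":
            pixel_data = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]   # stand-in for an image
            min_pixel, max_pixel = 10, 90
            with Pool() as pool:   # one worker process per CPU by default
                pixel_data = pool.map(stretch_row,
                                      [(row, min_pixel, max_pixel) for row in pixel_data])
            print(pixel_data)

    As in the article, the partitioning is done at the outer (row) level rather than per pixel, which keeps the per-task overhead small relative to the work each task performs.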

    Read the article

  • What's brewing in the world of Java? (Dec 22nd 2010)

    - by Jacob Lehrbaum
    The nights are getting darker, the email traffic seems to be getting lighter and the holiday season feels like it's right around the corner - but the world of Java is still as active as ever and shows no signs of taking a break! Let's take a look at everything that has been brewing over the past couple of weeks:

    Product Updates and Resources
      - JCP approves JSRs for Java SE 7, Java SE 8, Project Coin and Lambda (read more)
      - Java SE Update 23 released; delivers improved performance and enhanced support for right-to-left languages (read more or download)
      - New tutorial: JDK 7 Support in NetBeans IDE 7.0
      - Java EE 6 and GlassFish 3.0 have celebrated their respective one-year anniversaries! (read more) So naturally, it's time to start talking about Java EE 7 (read more)

    Webcasts
      - On demand: Developing Rich Clients for the Enterprise with the JavaFX Composer, Part 1
      - Coming soon: Smarter Devices with Oracle's Embedded Java Solutions

    Podcasts
      - Java Spotlight Podcast Episode 7: interview with Adam Messinger, Vice President of Java Development, on JavaOne Brazil, Java SE development, OpenJDK, JavaFX 2.0 and more!
      - The NetBeans team released Episode 53 of the NetBeans Podcast series on December 3rd, marking the first episode in nearly 12 months. Sign of things to come?

    Community and Events
      - JavaOne was held for the first time in Brazil this year, and by all accounts it was a great success! Read more about this exciting first in the following posts from Tori Wieldt (JavaOne Latin America Underway) and Janice Heiss (JavaOne in Brazil)
      - JavaOne was also held in Beijing for the first time last week and was also a huge success. Will try to include coverage of this event in the near future

    Articles and Interviews
      - An update on JavaServer Faces with Oracle's Ed Burns (read more)
      - Interview with Java Champion Matjaz B. Juric on Cloud Computing, SOA, and Java EE 6 (read more)
      - The 2010 JavaOne Java EE 6 Panel: Where We Are and Where We're Going (read more)

    Oracle Magazine
      The latest issue of Oracle Magazine is up and, in what will hopefully be a sign of the future, it includes a number of columns and articles on Java. First is an editorial from Editor-in-Chief Tom Haunert, who shares some insight into the long-standing relationship Oracle has had with Java. Next up is Oracle Technology Network Chief Justin Kestelyn's Community Bulletin entitled Java Evolves. And finally, Java Champion Adam Bien's feature on Java EE 6: Simplicity by Design. Enjoy!

    Read the article

< Previous Page | 21 22 23 24 25 26 27 28 29 30 31 32  | Next Page >